“As far as the laws of mathematics refer to reality, they are not certain, and as far as they are certain, they do not refer to reality.” (Einstein)

But if we want to work with the laws of math, we must have knowledge about equations.

I have a Google Doc with the 17 equations that changed the world. See https://docs.google.com/documenst/d/1-yi9X9_Zbrnpr3yKu3YsamTTCYyC6dqKEwZniZKKh3Y/edit#heading=h.ks5xj9eqv6la

This page is a collection of the most important equations. I try to explain their derivation and meaning on this page. I have also added some basic math necessary to understand the calculus behind the equations.




Differential equations

This YouTube video by Zach Star explains during its first minute how important differential equations are.

To understand how differential equations work and how they are solved, I recommend the videos below.

The power rule is very important in differentiation:
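A quick numerical sketch of the power rule (my own code, not from the page): differentiating x³ with a central difference quotient and comparing with the power-rule answer 3x².

```python
# Numerical check of the power rule d/dx x^n = n*x^(n-1),
# using a central difference quotient.

def derivative(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The power rule says d/dx x^3 = 3x^2; at x = 2 that is 12.
approx = derivative(lambda x: x**3, 2.0)
print(approx)  # close to 12
```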

Differentiation is further described by Khan Academy and 3Blue1Brown in these two videos.



Partial derivative

The term “partial derivative” has confused me with its new symbol ∂ (curly d).

Let’s say you have a function that depends on, e.g., two variables, or a composite function h(x) like h(x) = g(f(x)) with an inner function f(x) (compare to a nested function in programming). If a function depends on several variables, differentiating with respect to one variable while holding the others fixed gives a partial derivative, denoted with the curly d character ∂ (Unicode: U+2202), a stylized cursive d. To differentiate a composite function, you use the so-called chain rule.

An example is
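A minimal numerical sketch of partial differentiation (my own, hypothetical example f(x, y) = x²y, not from the page): each partial derivative is computed by varying one variable while holding the other fixed.

```python
# Numerical partial derivatives of f(x, y) = x**2 * y.

def f(x, y):
    return x**2 * y

def partial_x(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)  # ∂f/∂x, y held constant

def partial_y(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)  # ∂f/∂y, x held constant

# Analytically ∂f/∂x = 2xy and ∂f/∂y = x**2; at (2, 3) these are 12 and 4.
print(partial_x(f, 2.0, 3.0), partial_y(f, 2.0, 3.0))
```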


Chain rule

To write an equation for a two-frame system, we need the chain rule, which tells how to differentiate an outer function with an inner function.

In calculus, the chain rule is a formula to compute the derivative of a composite function. That is, if f and g are differentiable functions, then the chain rule expresses the derivative of their composite f ∘ g — the function which maps x to f(g(x)) — in terms of the derivatives of f and g and the product of functions as follows:

I found 3 applications of this rule in NancyPi’s YouTube video:

One example from Khan Academy:
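A quick numerical check of the chain rule (my own sketch, with a hypothetical example): for h(x) = sin(x²) = g(f(x)) with g = sin and f(x) = x², the chain rule gives h′(x) = cos(x²) · 2x.

```python
import math

# Compare a numerical derivative of sin(x^2) with the chain-rule answer.

def derivative(fn, x, h=1e-6):
    return (fn(x + h) - fn(x - h)) / (2 * h)

h_fn = lambda x: math.sin(x**2)   # composite function g(f(x))
x0 = 1.3
numeric = derivative(h_fn, x0)
analytic = math.cos(x0**2) * 2 * x0   # chain rule: g'(f(x)) * f'(x)
print(numeric, analytic)              # the two values agree
```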


Differentiation with Leibniz notation

Differentiation with the chain rule in Leibniz notation is used in the derivation of the metric tensor in the EFE. I found a good teacher in the video below.

For a function f(x) with an inner function g(x), one can write the chain rule in Lagrange’s notation as follows:

The chain rule may also be rewritten in Leibniz’s notation in the following way. If a variable Φ depends on the variable x, which itself depends on the variable y (i.e., x and Φ are dependent variables), then Φ, via the intermediate variable x, depends on y as well. In that case, the chain rule states that

(Source: Wiki  )

Here the chain rule in Leibniz notation is explained:

I took this screenshot that describes the method used in the video. Now I can proceed with DrPhysicsA’s metric tensor derivation. 🙂


Boltzmann equation


where k_B is the Boltzmann constant (also written simply as k), equal to 1.38065 × 10⁻²³ J/K.
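A small side calculation (my own, not from the page) showing the constant in use: the thermal energy scale k_B·T at roughly room temperature.

```python
# Thermal energy scale at room temperature.
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0           # K, roughly room temperature
E_thermal = k_B * T
print(E_thermal)    # ≈ 4.14e-21 J
```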


Dirac’s equation

Dirac managed to unite Einstein’s special relativity theory and the rules of the weird quantum world, with this equation:
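In natural units the equation is usually written as:

```latex
(i\gamma^\mu \partial_\mu - m)\,\psi = 0
```

where the γ^μ are the Dirac gamma matrices and ψ is a four-component spinor.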

or visualized as:

Al-Khalili talks about Dirac and his equation in the “Everything and Nothing” video, from min 37:00:


Elliptic integrals

Ramanujan said: “While asleep, I had an unusual experience. There was a red screen formed by flowing blood, as it were. I was observing it. Suddenly a hand began to write on the screen. I became all attention. That hand wrote a number of elliptic integrals. They stuck to my mind. As soon as I woke up, I committed them to writing.”

In integral calculus, an elliptic integral is one of a number of related functions defined as the value of certain integrals. Originally, they arose in connection with the problem of finding the arc length of an ellipse and were first studied by Giulio Fagnano and Leonhard Euler (c. 1750). Modern mathematics defines an “elliptic integral” as any function f which can be expressed in the form

where R is a rational function of its two arguments, P is a polynomial of degree 3 or 4 with no repeated roots, and c is a constant. ( Wiki )
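A sketch (my own, not from the page) of where these integrals come from: the arc length of an ellipse x = a·cos t, y = b·sin t has no closed form, so we evaluate the integral numerically with Simpson’s rule.

```python
import math

def ellipse_circumference(a, b, n=10_000):
    """Integrate sqrt(a^2 cos^2 t + b^2 sin^2 t) over t in [0, 2*pi]."""
    h = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t0, t1 = i * h, (i + 1) * h
        tm = (t0 + t1) / 2
        f = lambda t: math.hypot(a * math.cos(t), b * math.sin(t))
        total += (f(t0) + 4 * f(tm) + f(t1)) * h / 6  # Simpson on each panel
    return total

# Sanity check: for a circle (a = b = r) this must give 2*pi*r.
print(ellipse_circumference(1.0, 1.0))  # ≈ 6.2832
```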


Newton’s second law of rotation

Michio Kaku tells about Newton’s three laws from minute 8:00 in this YouTube video:


Khan Academy explains it well in this video

The rotational version of Newton’s law f = ma

is τ = Iα,

where α ≡ ω̇ denotes the angular acceleration. As in the previous section, τ is torque (tangential force f_t times a moment arm R), and I is the mass moment of inertia. Thus, the net applied torque τ equals the time derivative of the angular momentum L = Iω, just as force f equals the time derivative of the linear momentum p. Read more at ccrma.stanford.edu.
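A small worked example of τ = Iα (my own, assuming a hypothetical uniform disc, for which I = ½mR² about its axis):

```python
# Angular acceleration of a uniform disc from an applied torque.
m, R = 2.0, 0.5           # mass (kg) and radius (m) of the disc
I = 0.5 * m * R**2        # mass moment of inertia = 0.25 kg*m^2
tau = 3.0                 # applied torque, N*m
alpha = tau / I           # tau = I*alpha  =>  alpha = tau / I
print(alpha)              # 12.0 rad/s^2
```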


Heisenberg uncertainty principle

“The world of quantum physics is a world of uncertainty”
“Nature itself is based on uncertainty”
(Carlos Frenk, Durham)

The equation says that the more precisely the position of an object is known, the higher the uncertainty in its momentum (a vector quantity with both magnitude and direction).
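The usual mathematical statement of the principle is:

```latex
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```

where Δx and Δp are the standard deviations of position and momentum, and ℏ is the reduced Planck constant.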

Al-Khalili talks about the Heisenberg uncertainty principle in his “Everything and Nothing” video, from min 26:00:


Doc Schuster explains quite well how you derive the Heisenberg equation in this video:



Early study of triangles can be traced to the 2nd millennium BC, in Egyptian mathematics (Rhind Mathematical Papyrus) and Babylonian mathematics. The systematic study of trigonometric functions began in Hellenistic mathematics, reaching India as part of Hellenistic astronomy.[1] In Indian astronomy, the study of trigonometric functions flourished in the Gupta period, especially due to Aryabhata (sixth century CE), who discovered the sine function. During the Middle Ages, the study of trigonometry continued in Islamic mathematics, by mathematicians such as Al-Khwarizmi and Abu al-Wafa. It became an independent discipline in the Islamic world, where all six trigonometric functions were known. (source: Wiki )

The sin(x) function can be calculated by a computer program using factorials in the Taylor series (named after Brook Taylor, who introduced them in 1715).

(  Image source: Wiki  )

The formula was taken from https://www.mathworks.com, where a program algorithm is discussed. I wonder who developed this solution.
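A sketch of the Taylor-series approach described above (my own code, not the MathWorks listing): sin(x) = x − x³/3! + x⁵/5! − …, with the argument reduced into [−π, π] first so the series converges quickly.

```python
import math

def taylor_sin(x, terms=12):
    # Reduce x into [-pi, pi] so few terms are needed.
    x = math.remainder(x, 2 * math.pi)
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # Next term = previous term * -x^2 / ((2n+2)(2n+3)),
        # which builds the factorials incrementally.
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

print(taylor_sin(1.0))  # ≈ 0.8414709848, matches math.sin(1.0)
```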


L functions

While studying and trying to understand Riemann’s hypothesis, I ended up looking at L-functions.

L-functions were introduced by Dirichlet in the early 19th century “to study prime numbers in arithmetic progressions. Twenty years later Riemann took a major step forward by demonstrating how the study of these L-functions as functions of a complex variable could hold the key to deep mysteries about the distribution of prime numbers. In the same paper, Riemann introduced his famous hypothesis, which is now considered the ‘holy grail’ of mathematics.” (Read more at https://www.youtube.com/watch?v=A7Q3mLTq8M4 )

“In mathematics, an L-function is a meromorphic function on the complex plane, associated to one out of several categories of mathematical objects. An L-series is a Dirichlet series, usually convergent on a half-plane, that may give rise to an L-function via analytic continuation. The Riemann zeta function is an example of an L-function, and one important result involving L-functions is the Riemann hypothesis and its generalization.”

In mathematics, a Dirichlet series is any series of the form


where s is complex and (aₙ) is a complex sequence. It is a special case of the general Dirichlet series.

In mathematics, a Dirichlet L-series is a function of the form 

Here χ is a Dirichlet character (in number theory, Dirichlet characters are certain arithmetic functions which arise from completely multiplicative characters on the units of ℤ/kℤ)

and s a complex variable with real part greater than 1. By analytic continuation, this function can be extended to a meromorphic function on the whole complex plane, and is then called a Dirichlet L-function and also denoted L(s, χ).

( Wiki )
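A small numerical sketch (my own, not from the page): the Riemann zeta function is the Dirichlet series with aₙ = 1, ζ(s) = Σ 1/nˢ, and its partial sums can be computed directly.

```python
# Partial sums of the Dirichlet series for the Riemann zeta function.

def zeta_partial(s, N=100_000):
    """Sum 1/n^s for n = 1..N (converges for real part of s > 1)."""
    return sum(1.0 / n**s for n in range(1, N + 1))

# Euler showed zeta(2) = pi^2 / 6 ≈ 1.6449...
print(zeta_partial(2))  # approaches pi^2/6 as N grows
```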


Euler’s identity

which the physicist Richard Feynman called “our jewel”, is derived from the Taylor series of sin(x) and cos(x).

Euler's Identity: 'The Most Beautiful Equation' | Live Science

( Image source: www.livescience.com )


The period of the pendulum

Walter Lewin demonstrates the validity of the pendulum equation in this video. Read more at www.acs.psu.edu/

The derivation as described by www.acs.psu.edu:

 By applying Newton’s second law for rotational systems, the equation of motion for the pendulum may be obtained.

Read the derivation
at skill-lync.com.
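For small angles the equation of motion linearizes and the period becomes T = 2π√(L/g). A sketch of that formula (my own code):

```python
import math

def pendulum_period(L, g=9.81):
    """Small-angle period of a simple pendulum of length L (metres)."""
    return 2 * math.pi * math.sqrt(L / g)

print(pendulum_period(1.0))  # ≈ 2.006 s for a 1 m pendulum
```

Note that the period depends only on the length and g, not on the mass, which is what Lewin’s demonstration shows.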


Maxwell’s equations

Michio Kaku tells about Maxwell’s equations from minute 18:20 in this YouTube video:

Maxwell’s equations describe how electric and magnetic fields are generated by charges, currents, and changes of the fields. An important consequence of the equations is that they demonstrate how fluctuating electric and magnetic fields propagate at a constant speed (c) in a vacuum.

A good presentation is made on YouTube by 3Blue1Brown.

Issues in the video:

  • Quantum mechanics requires an understanding of waves.
  • The magnetic field is a vector field.
  • Maxwell’s equations tell how electric and magnetic fields generate each other.
  • Horizontally polarized wave: E = Aₓ cos(2πft + φₓ) Ι→〉 + 0 Ι↑〉,
    where Ι↑〉 is the unit vector in the vertical direction.
  • Vertically polarized wave: E = 0 Ι→〉 + A_y cos(2πft + φ_y) Ι↑〉,
    where Ι→〉 is the unit vector in the horizontal direction.

(Their videos are financed by course payments of about €5.54/month.)



Faraday’s law:

An introduction to electromagnetism, part 1, is made here:

As flux density B = Φ/A (unit Wb·m⁻², i.e. tesla),

the flux is Φ = BA (unit Wb).

Flux linkage (= BAN) in a coil with N turns.

where ε, the induced EMF, is the voltage across the device.
Maxwell came up with the right-hand rule (or corkscrew rule) to find out how the magnetic field goes around a piece of wire with a current in it. Khan Academy presents this law here:
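A numeric sketch of Faraday’s law, EMF = −N dΦ/dt (my own hypothetical numbers, not from the page): flux Φ = BA through a coil of N turns, with the field B ramping linearly in time.

```python
# Induced EMF in a coil from a linearly ramping magnetic field.
N = 100          # turns
A = 0.01         # coil area, m^2
dB_dt = 0.5      # field ramp rate, T/s
dPhi_dt = dB_dt * A          # dPhi/dt = A * dB/dt, in Wb/s
emf = -N * dPhi_dt           # Faraday's law: EMF = -N * dPhi/dt
print(emf)                   # -0.5 V
```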
Key notations:
∯_Σ denotes a surface integral over the surface Σ,
  • Symbols in bold represent vector quantities, and symbols in italics represent scalar quantities, unless otherwise indicated.
  • the electric field E, a vector field, and the magnetic field B, a pseudovector field, each generally having a time and location dependence.

The sources are:

  • the total electric charge density (total charge per unit volume), ρ, and
  • the total electric current density (total current per unit area), J.

The universal constants appearing in the equations (the first two explicitly only in the SI units formulation) are the permittivity of free space ε₀, the permeability of free space μ₀, and the speed of light c.

 In the differential equations,

  • the nabla symbol ∇ denotes the three-dimensional gradient operator, del,
  • the ∇⋅ symbol (pronounced “del dot”) denotes the divergence operator,
  • the ∇× symbol (pronounced “del cross”) denotes the curl operator.

Maxwell-Faraday equation
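In differential form this equation reads:

```latex
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
```

i.e. a time-varying magnetic field is accompanied by a circulating electric field.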


Laplace equation

Laplace’s equation is a second-order partial differential equation named after Pierre-Simon Laplace who first studied its properties. This is often written as

∇²f = 0 or Δf = 0,

where Δ = ∇ ⋅ ∇ = ∇² is the Laplace operator,[note 1] ∇ ⋅ is the divergence operator (also symbolized “div”), ∇ is the gradient operator (also symbolized “grad”), and f(x, y, z) is a twice-differentiable real-valued function. The Laplace operator therefore maps a scalar function to another scalar function.

If the right-hand side is specified as a given function h(x, y, z), we have

Δf = h.

This is called Poisson’s equation, a generalization of Laplace’s equation. Laplace’s equation and Poisson’s equation are the simplest examples of elliptic partial differential equations.
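A tiny numerical sketch (my own, not from the page): solving the 1-D Laplace equation f″ = 0 on [0, 1] with boundary values f(0) = 0, f(1) = 1 by Jacobi relaxation. The exact solution is the straight line f(x) = x, so the interior values converge to it.

```python
# Jacobi relaxation for the 1-D Laplace equation.
n = 11                      # grid points on [0, 1]
f = [0.0] * n
f[-1] = 1.0                 # boundary conditions f(0)=0, f(1)=1
for _ in range(2000):
    # Replace each interior point by the average of its neighbours;
    # fixed points of this update satisfy the discrete Laplace equation.
    f = [f[0]] + [(f[i-1] + f[i+1]) / 2 for i in range(1, n-1)] + [f[-1]]
print(f[5])                 # midpoint value, approaches 0.5
```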

With the Laplace equation we get:

Laplace’s tidal equations

The dynamic theory of tides, developed by Pierre-Simon Laplace in 1775,[9] describes the ocean’s real reaction to tidal forces. Laplace’s theory of ocean tides took into account friction, resonance and natural periods of ocean basins. It predicted the large amphidromic systems in the world’s ocean basins and explains the oceanic tides that are actually observed. Laplace obtained these equations by simplifying the fluid dynamic equations, but they can also be derived from energy integrals via Lagrange’s equation. ( Wiki )

For a fluid sheet of average thickness D, the vertical tidal elevation ζ, as well as the horizontal velocity components u and v (in the latitude φ and longitude λ directions, respectively) satisfy  Laplace’s tidal equations:[28]

where Ω is the angular frequency of the planet’s rotation, g is the planet’s gravitational acceleration at the mean ocean surface, a is the planetary radius, and U is the external gravitational tidal-forcing potential. ( Wiki )


Euler’s formula

Euler’s formula was used by Riemann to derive his zeta function, described on a separate page.

(  Image source: Wiki  )


A nice presentation of Euler’s formula is made by Khan Academy.

The Taylor series for the exponential function ex at a = 0 is

(  Image source: Wiki  )
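The series above can be checked directly (my own sketch): summing the partial Taylor series of eˣ and comparing with the library exponential.

```python
import math

# Partial sums of the Taylor series e^x = sum x^n / n! at a = 0.

def taylor_exp(x, terms=20):
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)   # x^n/n! -> x^(n+1)/(n+1)!
    return total

print(taylor_exp(1.0))  # ≈ 2.718281828..., matches math.e
```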


( Image source: Khan video )

( Image source: Khan video )


Euler’s identity 

In this video Mark Newman beautifully describes the relation between sinusoids and Euler’s number in Euler’s formula:

If I am not wrong…

If e^(iπ) = −1,

then e^(iπ) = i²,

so i = √(e^(iπ)).
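The identity itself is easy to check numerically (my own sketch, using Python’s complex math module):

```python
import cmath

# Euler's identity: e^(i*pi) + 1 = 0.
z = cmath.exp(1j * cmath.pi)
print(z)                     # -1 up to floating-point rounding
print(abs(z + 1))            # essentially zero
```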


Read more about it in
“Euler’s Identity: A Mathematical Proof for the Existence of God” at https://www.academia.edu/217151/Eulers_Identity


The Schrödinger equation derivation

In its simplest form, the Schrödinger equation looks like this:

(Source: screenshots from Parth video )

where i is the imaginary number, the square root of −1,

  • ℏ is related to Planck’s constant as ℏ = h/(2π),
  • Ĥ is what we call the Hamiltonian,
  • ψ represents wave functions in quantum mechanics,
  • the line and pointed bracket around ψ (psi) indicate that it is a quantum state. The quantum state is a mathematical description of a quantum system, e.g. an electron near the nucleus.


The Schrödinger equation calculates the probability for a particle (e.g. an electron) to be in a certain position. It is shared by Brian Greene in this World Science Festival video.

presented in one dimension in this image from Ul Islam on Quora:

Unnikrishnan Menon derived it for us on Quora.

I hope he doesn’t mind that I share his loooong derivation here. He writes:

It is all about finding the different energies that a particle can have.

We’re looking at things that can have more than one answer! You might have studied in high school that atoms have energy levels. The Schrödinger’s Wave Equation lets us calculate what these energies are.

Let’s start with thinking what is Kinetic Energy…

It is defined as:

It turns out that talking about velocity isn’t very useful, so we change this equation to make it depend on the momentum p.
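Written out (standard definitions, added here for readability), the step is:

```latex
E_k = \tfrac{1}{2} m v^2 = \frac{(mv)^2}{2m} = \frac{p^2}{2m}, \qquad p = mv
```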

We can think of particles as waves, at least at tiny scales where we need to use Quantum Mechanics!

To help us move between these 2 ways of thinking about matter, we can use De Broglie’s Equation

where λ is the wavelength, h is the Planck constant, p is the momentum, m₀ is the rest mass, v is the velocity, and c is the speed of light in a vacuum.”

This theory set the basis of wave mechanics. It was supported by Einstein, confirmed by the electron diffraction experiments of G P Thomson and Davisson and Germer, and generalized by the work of Schrödinger. ( Wiki )

Now, we don’t see matter around us in our everyday lives behaving like waves, because the Planck constant h is absolutely tiny! ( h = 6.62 × 10⁻³⁴ m²·kg/s )

But hold on! De Broglie’s Equation is useful when we are dealing with minute particles like protons and electrons 🙂

We also have something called ℏ.
It is related to the Planck constant as ℏ = h/(2π).

Now, it is going to be useful to talk about ℏ and the wave number instead of h and λ:

k = 2π/λ

is the wave number. We did this to avoid having factors of 2π running around in our equations. It just turns out to be easier to work with!
We also know that k changes depending upon the energy. Since k depends only on λ, the factors that can affect λ can also affect k. In this case, it is the energy!

We use Wave Functions to encode information about waves. These are just equations with a few special properties. They look something like this…

This is just another way to write sines and cosines. (Waves are just combinations of sines and cosines!)

Here’s the reason for writing this in this weird format…

This type of problem is called an eigenvalue problem.

ψ is called an eigenfunction.

The thing acting on it (here a derivative) is an operator.


Eigenfunctions of operators are functions which return themselves times a new eigenvalue after the operator is applied to them.

As you might have guessed by now, on differentiating this we get the same thing back, multiplied by some constants (the differential of the argument).

Let’s do this again!

Here, −k² is the new eigenvalue returned (that’s the extra bit).
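The eigenfunction property can be checked numerically (my own sketch): the plane wave ψ(x) = e^(ikx) comes back multiplied by −k² when the second-derivative operator acts on it.

```python
import cmath

# Finite-difference check that d^2/dx^2 e^(ikx) = -k^2 e^(ikx).
k = 3.0
psi = lambda x: cmath.exp(1j * k * x)

def second_derivative(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

x0 = 0.7
lhs = second_derivative(psi, x0)
rhs = -k * k * psi(x0)          # same function back, eigenvalue -k^2
print(abs(lhs - rhs))           # small
```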

Now let’s make a kinetic energy operator (we want the energy E and we are assuming a low potential)…

We know there are gonna be a bunch of answers, so we are going to use our wave function to generate those answers, and we also have a way of writing k².

So on putting it all together we get…

Notice it is a partial derivative instead of the usual one. This just means that there is more than one variable in the function; we treat the others as constants when differentiating.

What we just did is valid for a one dimensional particle that is time independent and is not in a potential.

Let’s generalize this equation a little…

For a photon, E = ℏω.

We’re now gonna assume it to be true for particles. It turns out to be a good assumption…


We know that there is gonna be a set of energies so we want to know the Eigenvalue equation.

This time we’ll find an E by generating an ω

Let’s look at the wave function to do that…

As you might have guessed it by now, we’re gonna take the derivative with respect to time to pull it out front!


To account for the minus sign (−), we will simply use −i · i = 1.


And that’s same as the equation we had before…

Please note that this equation is only for one spatial dimension! But we need to consider all 3 dimensions. The three dimensions in Cartesian Coordinate System are orthogonal to each other. That means, they will not interfere with each other.

Each of these contributes to the total energy.

So, in three dimensions, the time dependent wave equation becomes…

And the time independent version becomes…



This can be simplified further as…

where ∇ is what we call nabla.



To include Potential(V) we modify it a bit further like this…



where Ĥ is what we call the Hamiltonian.

It is an operator corresponding to the sum of the kinetic energies plus the potential energies for all the particles in the system (this sum is the total energy of the system in most of the cases under analysis). ( Wiki )

So, the full Schrödinger’s Wave Equation becomes…
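For reference, the standard full form (kinetic plus potential term acting on ψ) is:

```latex
i\hbar \frac{\partial \psi}{\partial t}
  = \left[ -\frac{\hbar^2}{2m} \nabla^2 + V(\mathbf{r}, t) \right] \psi
```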


The classical wave function

I shared the Schrödinger equation derivation above.

The classical wave equation is a linear, one-dimensional partial differential equation that includes neither relativity nor quantum mechanics.
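In one spatial dimension it is usually written as:

```latex
\frac{\partial^2 y}{\partial t^2} = c^2 \frac{\partial^2 y}{\partial x^2}
```

where c is the wave speed; any function of the form y = f(x − ct) + g(x + ct) (d’Alembert’s solution) satisfies it.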

Parth shares a very good presentation of it here:

A possible solution of the wave equation, as explained by Parth:


General relativity field equation (EFE)

This part is now on this page

Einstein field equation (EFE) tensors

This issue is explained on my tensor page.


Measuring consciousness level

Researchers have developed this equation, which uses TMS brain-scanning data:

Source RI lecture by Anil Seth.




Introduction to Information Theory

Edward Witten is presented in this Kyoto video:


I don’t know why Edward Witten chose to start with a short introduction to communication theory (the Shannon theory) at “Theoretical Physics 2018: From Qubits to Spacetime”.

I am looking for this youtube.

I presume it is important to know about this to understand quantum mechanics and the aspects of General Relativity that he continues with at the conference.
I decided to take a look at C. E. Shannon’s paper from 1948, “A Mathematical Theory of Communication”.

Shannon was an American mathematician, electrical engineer, and cryptographer known as “the father of information theory”. He also wrote “Theoretical Genetics.”[12]

A Mathematical Theory of Communication – Shannon theory
Khan Academy has a great introduction to Shannon’s theory in this video (in the YouTube settings, choose your subtitle language);
the video with English text is here.
C. E. Shannon’s paper from 1948 is “A Mathematical Theory of Communication”. The instructor in this Khan video says, among other things:
“Claude Elwood Shannon developed a theory about cryptography. Claude Shannon then demonstrated how to generate ‘English looking’ text using Markov chains.
Bernoulli’s weak law of large numbers says: as the number of trials increases, the observed ratio will converge on the actual underlying ratio. He refined the idea of expectation. If observations of all the events are continued for the entire infinity, it will be noticed that ‘everything in the world is governed by precise ratios and a constant law of change.’
The binomial distribution appears to be an ideal form, as it kept appearing everywhere any time you looked at the variation of a large number of random trials. It seems the average fate of these events is somehow predetermined; this is known today as ‘the central limit theorem’.
Most things in the physical world are clearly dependent on prior outcomes. We talk about dependent events or dependent variables.
Markov proved that independent and dependent events can converge on predictable distributions.
One of the most famous applications of Markov chains was published by Claude Shannon.”
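The “English looking” text trick the video describes can be sketched in a few lines (my own code, with a tiny hypothetical corpus): a first-order word Markov chain that, from each word, picks one of the words observed to follow it.

```python
import random
from collections import defaultdict

# Tiny toy corpus (hypothetical, just for illustration).
corpus = "the cat sat on the mat the cat ate the rat".split()

# Transition table: word -> list of words that follow it in the corpus.
table = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    table[a].append(b)

def generate(start, length, seed=0):
    """Random walk through the transition table, starting from `start`."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:        # dead end (word never followed by anything)
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("the", 8))
```

Every adjacent word pair in the output is a pair that actually occurs in the corpus, which is why the text “looks like” its source.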


What Every Physicist Should Know About String Theory

Michio Kaku talks about string theory from minute 31:00 in this YouTube video:

I listened to Edward Witten in the YouTube video

“What Every Physicist Should Know About String Theory” 

The slide texts are very difficult to read, and Witten speaks fast, so you get little time to think them over.

Here are the slide texts he presented, so you can hopefully read them at your own pace and maybe also get them satisfactorily translated into your own language:

Slide 1. I am going to try today to explain the minimum that any physicist might want to know about string theory. I will try to explain answers to a couple of basic questions.

  • How does string theory generalize standard quantum field theory?
  • And why does string theory force us to unify General Relativity with the other forces of nature, while standard quantum field theory makes it so difficult to incorporate General Relativity?
  • Why are there no ultraviolet divergences?
  • And what happens to Einstein’s conception of spacetime?

I thought that explaining these matters is possibly suitable for a session devoted to the centennial of General Relativity.

Slide 2. Anyone who has studied physics is familiar with the fact that while physics – like history – does not precisely repeat itself, it does rhyme, with similar structures at different scales of lengths and energies. We will begin today with one of those rhymes – an analogy between the problem of quantum gravity and the theory of a single particle.

Slide 3 04:32
Even though we do not really understand it, quantum gravity is supposed to be some sort of theory in which, at least from a macroscopic point of view, we average, in a quantum mechanical sense, over all possible spacetime geometries….

(We do not know to what extent this description is valid microscopically.) The averaging is done, in the simplest case, with a weight factor exp(−I) (I will write this in Euclidean signature), where I is the Einstein-Hilbert action

with R being the curvature scalar and Λ (lambda) the cosmological constant. We could add matter fields, but we don’t seem to have to.

Slide 4 05:20

Let us try to make a theory like this in spacetime dimension 1, rather than 4. There are not many options for a 1-manifold

In contrast to the 4d case, there is no Riemann curvature tensor in 1 dimension so there is no close analog of the Einstein-Hilbert action.

Slide 5.

Even though there is no ∫√g R to add to the action, we can still make a nontrivial theory of “quantum gravity,” that is, a fluctuating metric tensor coupled to matter. Let us take the matter to consist of some scalar fields X^j, j = 1, …, D (matter fields). The most obvious action is

where g = (g_tt) is a 1×1 metric tensor, and I have written m²/2 instead of λ.

Slide 6 (repeating slide 5)

If we introduce the “canonical momenta” Pⱼ = dXⱼ/dt,

then the “Einstein field equation” is just

In other words, the wavefunction Ψ(X) should obey the corresponding differential equation

Slide 7 (8:32) 

This is a familiar equation – the relativistic Klein-Gordon equation in D dimensions – but in Euclidean signature. If we want to give this fact a sensible physical interpretation, we should reverse the sign of the action for one of the scalar fields X⁰, so that the action becomes

 Now the equation obeyed by the wavefunction is a Klein-Gordon equation in Lorentz signature:

Slide 8  9:07

So we have found an exactly soluble theory of quantum gravity in one dimension that describes a spin 0 particle of mass m propagating in D-dimensional Minkowski spacetime. Actually, we can replace Minkowski spacetime by any D-dimensional spacetime M with a Lorentz (or Euclidean) signature metric G_IJ, the action being then

The equation obeyed by the wavefunction is now a Klein-Gordon equation on our spacetime M:

This is the massive Klein-Gordon equation in curved spacetime.

Slide 9  10:23

Just to make things more familiar, let us go back to the case of flat spacetime, and I will abbreviate G^IJ P_I P_J as P². (To avoid keeping track of some factors of i, I will also write formulas in Euclidean signature.) Let us calculate the amplitude for a particle to start at a point x in spacetime and end at another point y.

Part of the process of evaluating the path integral in a quantum gravity theory is to integrate over the metric on the one-manifold, modulo diffeomorphisms. But up to diffeomorphism, this one-manifold has only one invariant, the total length τ, which we will interpret as the elapsed proper time.

Slide 10 12:25 “Keeping τ fixed”

For a given τ we can take the 1-metric to be just g_tt = 1 where
0 ≤ t ≤ τ. (As a minor shortcut, I will take Euclidean signature on the 1-manifold, “as described by Feynman 50 years ago”.) Now on this 1-manifold, we have to integrate over all paths X(t) that start at x at t = 0 and end at y at t = τ.

Slide 11 14:17

For a given τ we can take the 1-metric to be just g_tt = 1 where 0 ≤ t ≤ τ. (As a minor shortcut, I will take Euclidean signature on the 1-manifold.) Now on this 1-manifold, we have to integrate over all paths X(t) that start at x at t = 0 and end at y at t = τ. This is the basic Feynman integral of quantum mechanics with the Hamiltonian being H = P² + m², and according to Feynman, the result is the matrix element of exp(−τH) (H = Hamiltonian):

But we have to remember to do the “gravitational” part of the path integral, which in the present context means to integrate over τ.

Slide 12 14:17

Thus the complete path integral for our problem – integrating over all metrics g_tt(t) and all paths X(t) with the given endpoints, modulo diffeomorphisms – gives

This is the standard Feynman propagator in Euclidean signature, and an analogous derivation in Lorentz signature (for both the spacetime M and the particle worldline) gives the correct Lorentz signature Feynman propagator, with the iε.

Slide 12 15:15

So we have interpreted a free particle in D-dimensional spacetime in terms of 1-dimensional quantum gravity. How can we include interactions? There is actually a perfectly natural way to do this. There are not a lot of smooth 1-manifolds, but there is a large supply of singular 1-manifolds in the form of graphs.

Our “quantum gravity” action makes sense on such a graph. We just take the same action that we used before, summed over all of the line segments that make up the graph

Slide 13 16:02

Now to do the quantum gravity path integral, we have to integrate over all metrics on the graph, up to diffeomorphism. The only invariants are the total lengths or “proper times” of each of the segments:

(I did not label all of them.)

Slide 14 17:14

To integrate over the paths, we just observe that if we specify the positions y₁, …, y₄ at all the vertices (and therefore on each end of each line segment),

then the computation we have to do on each line segment is the same as before and gives the Feynman propagator. Integrating over the yᵢ will just impose momentum conservation at the vertices, and we arrive at Feynman’s recipe to compute the amplitude attached to a graph: a Feynman propagator for each line, and an integration over all momenta subject to momentum conservation.

Slide 15 19:07

We have arrived at one of nature’s rhymes: if we imitate in one dimension what we would expect to do in D = 4 dimensions to describe quantum gravity, we arrive at something that is certainly important in physics, namely ordinary quantum field theory in a possibly curved spacetime. In the example that I gave, the “ordinary quantum field theory” is scalar φ³ theory, because of the particular matter system we started with, and assuming we take the graphs to have cubic vertices. Quartic vertices (for instance) would give φ⁴ theory, and a different matter system would give fields of different spins. So many or maybe all QFTs in D dimensions can be derived in this sense from quantum gravity in 1 dimension.

Slide 16 19:07
There is actually a much more perfect rhyme if we repeat this in two dimensions, that is, for a string instead of a particle. One thing we immediately run into is that a two-manifold Σ can be curved; the integral over 2d metrics promises to not be trivial at all.




This is a tough issue to deal with. The latest theory that mathematicians are working with is M-theory. I decided to start a new issue, “Introduction to Information Theory”.

A pluralist agnostic seeker
