My problem today is to solve the Friedmann equations. For those who aren't familiar with them, here they are (in my specific case):
$$\left(\frac{\dot{a}}{a^2}\right)^2 = \frac{\rho_1}{a^4} - \frac{\rho_2}{a^6}$$
So, my idea for solving this is to discretise in time, namely, write:
$$\dot{a} = \pm\sqrt{\rho_1 - \frac{\rho_2}{a^2}}$$
And then
$$\dot{a} = \frac{a_{i+1} - a_i}{dt}$$
Now, I want to start with $\dot{a} < 0$ and make it "bounce". Namely, if you solve these equations analytically, you get:
$$a(t) = \sqrt{\rho_1 t^2 + \frac{\rho_2}{\rho_1}}$$
Meaning that $a(t)$ has a minimum value, $a_{\min} = \sqrt{\rho_2/\rho_1}$, at $t=0$.
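As a sanity check, differentiating $a^2 = \rho_1 t^2 + \rho_2/\rho_1$ gives $2a\dot{a} = 2\rho_1 t$, so

$$\dot{a}^2 = \frac{\rho_1^2 t^2}{a^2} = \frac{\rho_1\left(a^2 - \rho_2/\rho_1\right)}{a^2} = \rho_1 - \frac{\rho_2}{a^2},$$

which matches the expression for $\dot{a}$ above and vanishes at $t = 0$, where $a = a_{\min}$.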
However, using time discretisation, in the phase when $\dot{a} < 0$ one gets

$$a_{i+1} = a_i - dt\,\sqrt{\rho_1 - \frac{\rho_2}{a_i^2}},$$
which at some point becomes complex: a step eventually overshoots below $a_{\min}$, and the argument of the square root goes negative.
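For concreteness, here is a minimal sketch of the scheme above in Python; the values of $\rho_1$, $\rho_2$, the starting time, and the step size are placeholders I picked only for illustration:

```python
import numpy as np

# Placeholder parameters, chosen only for illustration:
# the analytic minimum is then a_min = sqrt(rho2/rho1) = 1.
rho1, rho2 = 1.0, 1.0
dt = 1e-3

t = -2.0                                # start in the contracting phase (a_dot < 0)
a = np.sqrt(rho1 * t**2 + rho2 / rho1)  # initial condition taken from the analytic solution

while t < 2.0:
    arg = rho1 - rho2 / a**2
    if arg < 0.0:
        # The Euler step has overshot below a_min, so the square root
        # would be complex: this is where the scheme breaks down.
        print(f"sqrt argument went negative at t = {t:.4f}, a = {a:.6f}")
        break
    a -= dt * np.sqrt(arg)              # contracting branch: a_dot = -sqrt(...)
    t += dt
```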
This procedure obviously has some flaws; how can I correct it? I would like to write an algorithm that solves the equation numerically using some sort of time discretisation, since I will later need to handle a time-dependent $\rho_2(t)$. The algorithm must reproduce the analytical solution, with the feature that, once $a(t)$ has reached its minimum value, it stops decreasing and starts increasing.
Any help is appreciated.
This post imported from StackExchange Physics at 2015-03-23 08:53 (UTC), posted by SE-user MrFermiMr