
When I read that Google installed their own atomic clocks in each datacenter for Spanner, I knew they were doing some real computer science (and probably general relativity?) work: https://www.theverge.com/2012/11/26/3692392/google-spanner-a...


We had hardware for time sync at Google 15 years ago; that's not a new thing. We had time sync hardware (via GPS) in the early Twitter datacenters as well, until it became clear that was impossible to support without getting roof access. =)


Accurate clocks in data centers predate Google. Telecoms needed accurate clocks for their time-sliced fiber infrastructure. Same for cellular infrastructure.


yeah, synchronizing clocks across distributed systems is really hard without expensive hardware
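
To make that concrete, here is a minimal sketch of software-only sync (NTP-style probing, a la Cristian's algorithm) - query_server_time is a made-up RPC that reads the remote clock. The floor on accuracy is half the round-trip time, which over a WAN is milliseconds no matter how good the clocks themselves are:

  import time

  def estimate_offset(query_server_time):
      # query_server_time() is a hypothetical RPC returning the
      # remote clock's reading as a Unix timestamp.
      t0 = time.time()                 # local time at request send
      server_ts = query_server_time()  # remote clock reading
      t1 = time.time()                 # local time at reply receive
      rtt = t1 - t0
      # Assume the reply arrived halfway through the round trip...
      offset = server_ts - (t0 + rtt / 2)
      # ...but the two legs may be asymmetric, so the honest error
      # bound is half the RTT.
      uncertainty = rtt / 2
      return offset, uncertainty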


...but for distributed databases specifically, you can use a different algorithm like Calvin[0], as Fauna[1] does, which doesn't require external atomic clocks (toy sketch after the links below)… but the CS point, and the wealth of info in distributed-systems research papers, are solid

...but there is a lot of noise in those papers, too - you are often disappointed by the fine print unless you have good curators/thought-leaders [2] - we all should share names ;)

enjoying the discussion though - very timely if you ask me.

-L, author of [1] below.

[0] The original Calvin paper - https://cs.yale.edu/homes/thomson/publications/calvin-sigmod...

[1] How Fauna implements a variation of Calvin - https://fauna.com/blog/inside-faunas-distributed-transaction...

[2] A great article about Calvin by Mohammad Roohitavaf - https://www.mydistributed.systems/2020/08/calvin.html
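
And since the papers are dense, here is a toy sketch of Calvin's core idea (emphatically not Fauna's actual code): agree on a global transaction order first, via a replicated log, then have every replica execute that log deterministically - replicas converge with no clock synchronization at all:

  from dataclasses import dataclass, field
  from typing import Callable, Dict, List

  Txn = Callable[[Dict[str, int]], None]

  @dataclass
  class Replica:
      store: Dict[str, int] = field(default_factory=dict)

      def apply(self, log: List[Txn]) -> None:
          # Same log, same order, deterministic ops -> same state.
          for txn in log:
              txn(self.store)

  def transfer(src: str, dst: str, amount: int) -> Txn:
      def txn(store: Dict[str, int]) -> None:
          store[src] = store.get(src, 0) - amount
          store[dst] = store.get(dst, 0) + amount
      return txn

  # The "sequencer": in real Calvin this order comes out of a
  # replicated, partitioned log; here it's just a Python list.
  log = [transfer("a", "b", 10), transfer("b", "c", 5)]

  r1, r2 = Replica(), Replica()
  r1.apply(log)
  r2.apply(log)
  assert r1.store == r2.store  # replicas agree, no clocks involved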


I hear this from tech people, but HFT people are happily humming along with highly synchronized clocks (MiFID II requires clocks to be synchronized to within 100µs). I wouldn't say it's "easy", but apparently if you need it, you do it, and it's not that bad.


> (MiFID II requires clocks to be synchronized to within 100µs)

That only applies if the clocks are within 1ms of each other, so around 100 miles apart (or, equivalently, within a single cloud region), and it only came into force in 2014.

The bound that Spanner-likes keep is ~3ms for datacenters across continents, and that was in 2012.
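
And the way Spanner-likes spend that bound is commit wait. A simplified sketch - epsilon is hardcoded here, whereas in Spanner it comes from the TrueTime API:

  import time

  EPSILON = 0.003  # assumed worst-case clock uncertainty (~3ms, per above)

  def commit_wait(commit_ts: float) -> None:
      # Block until every clock in the system is provably past the
      # chosen commit timestamp: our "earliest possible now" is
      # time.time() - EPSILON. Only then is the write made visible,
      # which is what buys external consistency.
      while time.time() - EPSILON <= commit_ts:
          time.sleep(EPSILON / 10)

  commit_ts = time.time() + EPSILON  # latest plausible "now"
  commit_wait(commit_ts)             # costs roughly 2 * EPSILON

So shrinking the uncertainty bound with GPS/atomic clocks directly shrinks commit latency.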


I wonder if you have to account for localized gravity differences for that kind of thing.


ChatGPT tells me that the relativistic drift between the top of Mount Everest and sea level is in the nanosecond-per-day range - so completely dwarfed by network latency, and maybe it doesn't matter?

Pure speculation on my part, though.
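
For what it's worth, the weak-field back-of-the-envelope (fractional rate difference of roughly g*h/c^2, assuming that approximation applies here) lands in the same ballpark:

  # Clocks higher in a gravity well run faster by ~ g*h / c^2.
  g = 9.8      # m/s^2, surface gravity
  h = 8849     # m, Everest's summit above sea level
  c = 3.0e8    # m/s, speed of light

  rate = g * h / c**2              # ~9.6e-13 fractional difference
  ns_per_day = rate * 86400 * 1e9
  print(f"~{ns_per_day:.0f} ns/day")  # ~83 ns/day, i.e. ~30 us/year

Tens of nanoseconds a day - so yes, dwarfed by network latency.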


If you supplied that answer to a group concerned with time and gravity, they might ask whether that was the right question to ask w.r.t. the "greatest relativistic time separation between two points on 'the earth's surface'".

ChatGPT likely won't help - but you could look into the fact that the earth isn't round: it's an oblate spheroid whose radius at the poles is some 20km less than at the equator.

Of course, the fact that the ideal WGS84 ellipsoid, the official mean global sea level, and the geoid (the gravitational equipotential surface) don't all align must surely come into play here - as must that bloody great "gravitational hole" somewhere south of Ceylon.

https://www.e-education.psu.edu/geog862/node/1820

https://en.wikipedia.org/wiki/Gravity_of_Earth

https://en.wikipedia.org/wiki/World_Geodetic_System


Yep, I am way out of my depth here :)


No drama, the greatest difference seen on the walkable|sailable Earth is less than a tenth of sweet bugger all (as you noticed).

I just felt like pointing out a rabbit hole that some might like to dive into!



