Faster websites, more reliable data

Thursday, October 14, 2010 - 03:20 in Mathematics & Economics

Today, visiting almost any major website — checking your Facebook news feed, looking for books on Amazon, bidding for merchandise on eBay — involves querying a database. But the databases these sites maintain are enormous, and searching them anew every time a user logs on would be painfully time-consuming. To serve up data in a timely fashion, most big sites use a technique called caching: their servers keep local copies of their most frequently accessed data, which they can send to users without searching the database.

But caching has an obvious problem: if any of the data in the database changes, the cached copies have to change too; moreover, any cached data that are in any way dependent on the changed data also have to change. Tracking such data dependencies is a nightmare for programmers, but even when they do their jobs well, problems can arise. For instance,...
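To make the dependency problem concrete, here is a minimal sketch in Python of the kind of bookkeeping a cache has to do. Everything in it (the DependencyCache class, its methods, and the example keys) is a hypothetical illustration, not the system described in the article: the cache records which database keys each stored result was computed from, so that a later write can throw out every result that depended on the changed key.

    # Minimal sketch of caching with dependency tracking.
    # All names are hypothetical illustrations, not the MIT system.
    class DependencyCache:
        def __init__(self, database):
            self.db = database    # the authoritative key/value store
            self.cache = {}       # query name -> cached result
            self.dependents = {}  # db key -> set of query names that used it

        def get(self, query_name, compute, depends_on):
            """Return a cached result, computing and recording dependencies on a miss."""
            if query_name in self.cache:
                return self.cache[query_name]
            result = compute(self.db)
            self.cache[query_name] = result
            for key in depends_on:
                self.dependents.setdefault(key, set()).add(query_name)
            return result

        def write(self, key, value):
            """Update the database and invalidate every cached result that used this key."""
            self.db[key] = value
            for query_name in self.dependents.pop(key, set()):
                self.cache.pop(query_name, None)

    # Example: a cached "front page" that depends on two database keys.
    db = {"headline": "Old news", "top_story": "Story A"}
    cache = DependencyCache(db)

    build = lambda d: f"{d['headline']}: {d['top_story']}"
    print(cache.get("front_page", build, ["headline", "top_story"]))  # computed, then cached

    cache.write("headline", "Fresh news")  # invalidates the dependent cache entry
    print(cache.get("front_page", build, ["headline", "top_story"]))  # recomputed with new headline

In practice the hard part is not the data structure but knowing, for every cached result across a large codebase, exactly which underlying data it depends on; that is the bookkeeping the article says programmers find so error-prone.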

Read the whole article on MIT Research
