MIT’s new programming language delivers fourfold speedups on big-data problems

Image: Christine Daniloff/MIT

The original press release was issued by MIT and written by Larry Hardesty.

In the age of big data, today’s computer chips struggle to manage memory efficiently. Memory management currently relies on what computer scientists call the principle of locality: if a program needs to access a chunk of data stored at some memory bank, chances are it needs the neighboring chunks as well. But most programs now need only a few data items scattered across huge data sets, and fetching them, one cache line at a time, over and over again, results in a dramatic slowdown in performance.

Now, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have presented a new programming language, dubbed Milk, that addresses these issues: programs written in it run four times as fast as those written in existing languages, with room for even greater performance gains. The new language tackles the biggest problem with today’s big data sets, their “sparseness”. This means that with big data, the scale of the solution does not necessarily increase proportionally with the scale of the problem.

“An online bookseller with, say, 1,000 customers might like to provide its visitors with a list of its 20 most popular books. It doesn’t follow, however, that an online bookseller with a million customers would want to provide its visitors with a list of its 20,000 most popular books,” explains Saman Amarasinghe, a professor of electrical engineering and computer science.

So where do the game-changing differences lie? With a Milk program, when a computer chip’s core, or processor, discovers that it needs a piece of data, it doesn’t request it — and a cacheful of adjacent data — from main memory. Instead, it adds the data item’s address to a list of locally stored addresses. When the list is long enough, all the chip’s cores pool their lists, group together those addresses that are near each other, and redistribute them to the cores. That way, each core requests only data items that it knows it needs and that can be retrieved efficiently.
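A single-core version of that idea can be sketched in a few lines of C. This is an illustration of the batching principle described above, not code from Milk itself: the function and buffer names are invented, and the real system works across cores and inside the compiler. The trick is simply to buffer the needed addresses, sort them so nearby ones are grouped, and only then touch main memory in one ordered pass.

```c
/* Hypothetical sketch of Milk's batching idea on one core: defer the
 * scattered accesses, sort the collected indices so nearby addresses
 * are grouped, then gather them in a single ordered pass. */
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Gathers data[wanted[i]] for all i, but visits memory in address order.
 * Note: sorts the 'wanted' buffer in place. */
long long gather_batched(const int *data, int *wanted, int n) {
    qsort(wanted, n, sizeof *wanted, cmp_int);  /* group nearby addresses */
    long long s = 0;
    for (int i = 0; i < n; i++)
        s += data[wanted[i]];   /* near-sequential pass over main memory */
    return s;
}
```

The result is the same as gathering in the original order; the gain comes from cache lines being reused instead of refetched, which is the efficiency the pooled-and-redistributed lists buy the chip’s cores.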

Breaking it down in a little more detail: with Milk, a programmer inserts a couple of additional lines of code into an OpenMP program (OpenMP is an extension of languages such as C and Fortran), around any instruction that iterates through a large data collection looking for a comparatively small number of items. Milk’s compiler, the program that converts high-level code into low-level instructions, then figures out how to manage memory accordingly.

“It’s as if [regarding the old technique of going to main memory for a single data item at a time], every time you want a spoonful of cereal, you open the fridge, open the milk carton, pour a spoonful of milk, close the carton, and put it back in the fridge,” says Vladimir Kiriansky, a PhD student in electrical engineering and computer science and first author on the new paper.

As mentioned earlier, the researchers see room for improvement. They are mainly targeting the Milk compiler, which not only keeps track of a list of memory addresses and the data stored at them, but also decides which addresses should be retained and which discarded. Improving the algorithms that drive these decisions is where further performance gains may lie.

Michal Madaras
