by U.Q. Magnusson


Using Generational Memory Schemes for Big Data
December 29, 2014

There are two worlds.

In the first world of computer science, physical resources are disregarded as irrelevant distractions. In the second world of natural science, physical resources are recognized as brutal reality. One stark example of this is the contrast between thread pool allocations and generational memory schemes.

Most programmers are familiar with thread pool allocations. Multiple flows of control, called threads, are placed into a container, called a pool. Each thread waits in a queue and executes according to the interplay between its relative priority and the pool’s allowable thread count. As defined by the program, each thread allocates a part of the computer’s physical resources: memory, CPU, and I/O. When the thread fulfills its purpose, it dies to make room for the next thread in line.
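The pool lifecycle above can be sketched in a few lines of Java; the class and method names here are illustrative, not a fixed API.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a fixed-size pool queues tasks and runs at most four at a
// time; each task "dies" when its work is done, freeing its slot.
public class PoolSketch {
    public static int runTasks(int taskCount) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            pool.execute(done::incrementAndGet); // waits its turn in the queue
        }
        pool.shutdown();                         // no new work admitted
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(runTasks(100) + " tasks completed");
    }
}
```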

Simple, right?

No. Not at all. Imagine a world in which each person is granted effective immortality. If the thread never dies, then its resources are never released. This is the hyper-idealized world of computer science, where physical resources are virtualized, commoditized abstractions applied to consistent, continuous patterns. In the natural world, of course, real-time processing is neither consistent nor continuous. It is messy and random. Inputs arrive at odd intervals, space is allocated in discontinuous patterns, and the end is always right around the next corner.

Take, for example, the processing of a financial market data feed. Your program may do several things. It can analyze the data for patterns, capture them in proprietary formats, or simply save them to disk. What all these tasks have in common is that they share a producer-consumer conundrum. If the consumer (your process thread) cannot keep up with the producer (the feed thread), then bad things happen: disconnects, memory faults, etc. Therefore, those disregarded resources of memory and I/O can quickly become ruthless determinants of life and death. Speed kills.

The most obvious way to meet the need for speed is to eliminate I/O bottlenecks. Anytime your program hits the hardware, it slows down. Therefore, whatever you can do in-memory, do it. This is your first optimization.

Speed is the ratio of space over time, so shrinking the denominator of time increases the speed. In modern computers, multicore processors are de rigueur, and all decent programs now use thread pools to distribute their processing over multiple cores. At best, this divides the total time by the number of cores. More cores equals less time. This is your second optimization.

Your third optimization is the Generational Memory Scheme, or GMS. By optimizing the allocation of memory, the GMS inflates the space numerator, and thus enlarges the speed ratio. It may sound complicated, but the GMS is simple once you understand the reasoning.

To begin, your data must be partitioned appropriately. Something like market quotes is straightforward: simply map each stock symbol to a separate thread. Each symbol’s thread has its own lifecycle; it is born at the first quote, consumes resources as it performs its function, and dies when its purpose is fulfilled. The sand in the Vaseline is this: although there is a limit to the pool’s resources, there is no such limit to the quantity of data it is asked to process.
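A minimal sketch of the per-symbol partitioning, assuming a lazily created single-thread worker per symbol; `SymbolRouter` and its methods are hypothetical names.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch: each symbol's worker is born at its first quote and processes
// that symbol's quotes in order on its own thread.
public class SymbolRouter {
    private final Map<String, ExecutorService> workers = new ConcurrentHashMap<>();
    private final Map<String, Integer> counts = new ConcurrentHashMap<>();

    public void onQuote(String symbol) {
        workers.computeIfAbsent(symbol, s -> Executors.newSingleThreadExecutor())
               .execute(() -> counts.merge(symbol, 1, Integer::sum));
    }

    public void shutdown() {
        workers.values().forEach(ExecutorService::shutdown);
        for (ExecutorService w : workers.values()) {
            try { w.awaitTermination(5, TimeUnit.SECONDS); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }

    public int count(String symbol) { return counts.getOrDefault(symbol, 0); }
}
```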

A typical day’s worth of market quotes can quickly overwhelm even the vastest memory space. For instance, normal trading can see peaks of over a million quotes per second. If your program requires two kilobytes of memory to process each quote, you will see two gigabytes allocated in one second. If your total memory space is only sixteen gigabytes, you will be dead in eight seconds. In the real world, this is called “scarcity of resources”, and you, my friend, have fallen victim to “natural selection”.
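The arithmetic above can be captured in a small helper; the numbers match the example (16 GB space, 2 KB per quote, one million quotes per second).

```java
// Sketch: quote rate times bytes per quote gives a burn rate; total
// memory divided by burn rate gives your seconds of life.
public class BurnRate {
    public static long secondsUntilExhaustion(long memoryBytes,
                                              long bytesPerQuote,
                                              long quotesPerSecond) {
        long bytesPerSecond = bytesPerQuote * quotesPerSecond;
        return memoryBytes / bytesPerSecond;
    }

    public static void main(String[] args) {
        // 16 GB space, 2 KB per quote, 1M quotes/sec: dead in 8 seconds.
        System.out.println(secondsUntilExhaustion(16L << 30, 2L << 10, 1_000_000L));
    }
}
```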

The problem is with how the thread pool operates. In a pool lifecycle, data live until their purpose is fulfilled. In a real-time market environment, data don’t know when their purpose is fulfilled. It may be within a few milliseconds; it may be all day long. For instance, Apple Inc. (AAPL) never stops quoting; Boiler Room Stock Shop Special (BSSS) stops with the first quote. Best to optimize away BSSS, and keep AAPL.

But how does one optimize the pool in its totality? How does one, in effect, practice natural selection within the pool? You got it. In the natural world, everyone dies. So be it with data.

In a GMS, thread lifespans are dictated by a controller process. This is a singleton object running in its own thread. Like an impersonal god in the sky, the controller stops at periodic intervals to look down into the pool, and to render merciless judgment on its inhabitants. It assesses each thread’s viability with respect to the limited resources of the machine, and turns its thumb up or down.
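One way such a controller might look in Java, sketched as a scheduled singleton; the interval, the `judgment` callback, and all names are assumptions for illustration.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: a singleton controller wakes at a fixed interval, renders
// judgment on the pool, and advances the generation count.
public class GmsController {
    private static final GmsController INSTANCE = new GmsController();
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();
    private final AtomicInteger generation = new AtomicInteger();

    private GmsController() {}                 // singleton: no public constructor
    public static GmsController instance() { return INSTANCE; }

    public void start(long intervalMillis, Runnable judgment) {
        timer.scheduleAtFixedRate(() -> {
            judgment.run();                    // look down into the pool
            generation.incrementAndGet();      // a new generation begins
        }, intervalMillis, intervalMillis, TimeUnit.MILLISECONDS);
    }

    public int generation() { return generation.get(); }
    public void stop() { timer.shutdownNow(); }
}
```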

If a thread is found to be filling its spin buffers, its data are flushed to disk. If its buffers have failed to update within a specified period of time, its data are flushed to disk. If the entire memory space of the pool is pushing the limits of its machine, all the data are flushed to disk. This is significant, because when a thread’s data are flushed to disk, the thread has no more purpose. It is terminated, and all its resources are freed for the next generation of processing.
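The three flush rules could be rolled into a single predicate; the 90% buffer threshold here is an arbitrary stand-in for whatever limit a real feed handler would use.

```java
// Sketch of the controller's judgment: flush a thread's data when its
// buffers are nearly full, when it has gone stale, or when the whole
// pool is pressing against the machine's memory limit.
public class Judgment {
    public static boolean shouldFlush(long bufferUsed, long bufferCapacity,
                                      long idleGenerations, long idleAllowance,
                                      long poolBytes, long poolLimitBytes) {
        boolean buffersFilling = bufferUsed * 10 >= bufferCapacity * 9; // ~90% full
        boolean stale = idleGenerations > idleAllowance;   // no updates lately
        boolean poolPressed = poolBytes >= poolLimitBytes; // global pressure
        return buffersFilling || stale || poolPressed;
    }
}
```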

With each interval of the controller’s judgment, a new generation of threads is born to replace those that died. Thus your program continually recycles itself over time, growing stronger in some places, weaker in others. Evolution in action.

This algorithmic optimization is simple.

There is a controller thread. This has a higher priority than all the pool’s constituent threads. It contains a variety of attributes, like the number of allowable threads in its pool, the ongoing generation count, the maximum amount of memory to be allocated to each thread, an individual thread’s allowance of inactive generations, a running statistical analysis of the pool’s resource allocation, and so on. There is no limit to the possible measurement values held by this controller object.

Underneath the controller is the thread pool itself – in our example, a society of separate symbols like BSSS, AAPL, and all the rest. Each thread is defined by its object attributes, such as last update, total quote count, time-to-live, maximum spread, most active market, and so on. If a thread is found wanting by the controller, it is put on the proverbial ice floe, and its resources are freed for the next generation. Hence the society evolves into a hierarchy for survival.
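The attribute lists above might be modeled as plain data classes; every field name here is illustrative.

```java
// Sketch: a controller record and a per-symbol thread record carrying
// the attributes the text lists.
public class GmsModel {
    public static class Controller {
        public int maxThreads;              // allowable threads in the pool
        public int generation;              // ongoing generation count
        public long maxBytesPerThread;      // per-thread memory ceiling
        public int idleGenerationAllowance; // inactive generations before eviction
    }

    public static class SymbolThread {
        public String symbol;               // e.g. "AAPL" or "BSSS"
        public long lastUpdateMillis;       // last quote seen
        public long totalQuotes;            // lifetime quote count
        public int idleGenerations;         // generations without an update

        public boolean wanting(Controller c) { // found wanting: onto the ice floe
            return idleGenerations > c.idleGenerationAllowance;
        }
    }
}
```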

The GMS is an innovation created in response to the requirements of real-time processing. If this were batch processing, you could simply queue up all the symbols and let them wait their turn. In a real-time environment, you have to react immediately, but you don’t know what to expect.

By now, you may be asking yourself an obvious question: what happens when that peak of 1M messages per second doesn’t abate, and your 16G memory space is effectively gone after eight seconds? This is the beauty of having a controller thread. You can define it to control which pool thread gets temporarily swapped to disk, in effect, to be cryogenically frozen until the society’s resources are capable of processing it.
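A minimal sketch of the freeze-and-thaw idea, spilling a symbol's buffered quotes to a temp file and clearing the in-memory copy; `Freezer` is a hypothetical helper, not part of any real API.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch of "cryogenic freezing": write a symbol's state to a spill
// file, free the memory, and reload when the pool has room again.
public class Freezer {
    public static Path freeze(String symbol, List<String> quotes) throws IOException {
        Path spill = Files.createTempFile(symbol + "-", ".spill");
        Files.write(spill, quotes);   // flush state to disk
        quotes.clear();               // free memory for the next generation
        return spill;
    }

    public static List<String> thaw(Path spill) throws IOException {
        return Files.readAllLines(spill); // resurrect when resources allow
    }
}
```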

Another question you may be asking yourself: how many resources does the controller thread consume by itself? Not much, because it is a lightweight singleton. All its attributes are simple data types, and all its methods are static.

Yet another question: how do you optimize the optimizer? How do you define the controller’s attributes of memory usage, generation count, etc.? Personally, I develop everything to be manually definable via runtime parameters. I capture performance logs of different configurations. I make decisions. And when I understand the environment better, I automate that, too.
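Manually definable runtime parameters might be read from JVM system properties with sensible defaults, as in this sketch; the `gms.*` property names are invented for illustration.

```java
// Sketch: every tuning knob is a runtime parameter with a default, so
// different configurations can be tried and logged without recompiling.
public class Tuning {
    public static int intParam(String name, int defaultValue) {
        String raw = System.getProperty(name);
        return raw == null ? defaultValue : Integer.parseInt(raw);
    }

    public static void main(String[] args) {
        int maxThreads = intParam("gms.maxThreads", 64);
        int idleAllowance = intParam("gms.idleGenerations", 3);
        System.out.println("maxThreads=" + maxThreads
                + " idleAllowance=" + idleAllowance);
    }
}
```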

In summary, the GMS is another of nature’s spin-offs into computer science. Like the A.I. tropes of neural networks and genetic algorithms, there is much to be learned from the real world. The design of functioning organisms, and by extension, viable societies, was not concocted in a laboratory. The natural world’s designs evolve over time, and they withstand the test of time. The Generational Memory Scheme is one more example.

Copyright 2014 U.Q. Magnusson


