Interactivity score in ULE

I sometimes speak with Con Kolivas, who is known for several Linux schedulers. Con is an interesting fellow because his background is not CS and he is very pragmatic about desktop performance. He doesn't care for the interactivity boost that ULE and previous Linux schedulers use in various forms, and he periodically challenges me to consider the interactivity algorithm and whether it is ultimately necessary and effective. Below I present some of the analysis done while constructing the algorithm now used in ULE, and why I believe it is effective and necessary while not suffering many of the pitfalls of earlier approaches.

Firstly, let me define the properties of what I believe is a good interactivity algorithm. These were my guiding principles in creating the ULE algorithm.

1) Any interactivity boost is gained slowly and lost quickly.
2) Interactivity should be harder to achieve the greater the system load.
3) The algorithm should not be exploitable to achieve an unfair share of the CPU.
4) The algorithm should be cheap to maintain and compute.
5) There should be sufficient history to permit bursty applications like web browsers to remain interactive.

The ULE algorithm uses a decaying history of voluntary sleep time and run time. This is similar to %cpu, except that involuntary sleep time is not considered; that is, threads that are waiting due to contention for CPU resources are not given an interactivity boost for their time spent waiting. This allows the algorithm to work properly regardless of CPU load: if you only consider %cpu, eventually all threads on a busy system will look interactive.
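
To make the bookkeeping concrete, here is a minimal sketch of that decaying history, not the kernel source; the structure fields, tick rate, and cap are assumptions chosen for illustration. Run time and voluntary sleep time are accumulated per thread, and once their sum exceeds a window of a few seconds both are scaled back proportionally so that recent behavior dominates.

```c
/*
 * A minimal sketch of the decaying run/sleep history, not the kernel
 * source; the names, tick rate, and cap below are illustrative only.
 */
#define HZ           1000          /* assumed scheduler tick rate */
#define SLP_RUN_MAX  (5 * HZ)      /* remember at most a few seconds */

struct interact_hist {
	unsigned int runtime;      /* ticks spent running on a cpu */
	unsigned int slptime;      /* ticks spent voluntarily asleep */
};

/*
 * Charge 'ran' ticks of cpu time and 'slept' ticks of voluntary sleep.
 * Time spent waiting on a run queue is never added, so threads starved
 * by cpu contention do not accumulate an interactivity boost.
 */
static void
interact_update(struct interact_hist *h, unsigned int ran, unsigned int slept)
{
	h->runtime += ran;
	h->slptime += slept;

	/*
	 * Decay: once the window is full, halve both counters so the
	 * run/sleep ratio is preserved but old history fades quickly.
	 */
	while (h->runtime + h->slptime > SLP_RUN_MAX) {
		h->runtime /= 2;
		h->slptime /= 2;
	}
}
```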

The algorithm scales the ratio of run time to sleep time to a value between 1 and 100. This is somewhat awkward in the kernel, where we can't use floating point math. The divisor is chosen based on whichever value is larger, giving a sort of bimodal distribution.
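
A sketch of that scaling in plain integer math is below; it is not the kernel source, and the names and exact constants are assumptions for illustration. Whichever of the two counters is larger becomes the basis for the divisor, mapping mostly-sleeping threads into the lower half of the range and mostly-running threads into the upper half.

```c
/*
 * A sketch of the score computation in integer math, not the kernel
 * source.  The kernel keeps runtime and slptime in scaled-up units so
 * the integer divisions below lose very little precision.
 */
#define INTERACT_MAX   100
#define INTERACT_HALF  (INTERACT_MAX / 2)

int
interact_score(unsigned int runtime, unsigned int slptime)
{
	unsigned int div;

	if (slptime > runtime) {
		/*
		 * Mostly sleeping: roughly 50 * runtime / slptime,
		 * which lands in the lower half of the range.
		 */
		div = slptime / INTERACT_HALF;
		if (div == 0)
			div = 1;
		return (runtime / div);
	}
	if (runtime > slptime) {
		/*
		 * Mostly running: roughly 100 - 50 * slptime / runtime,
		 * which lands in the upper half of the range.
		 */
		div = runtime / INTERACT_HALF;
		if (div == 0)
			div = 1;
		return (INTERACT_HALF + (INTERACT_HALF - slptime / div));
	}
	/* Equal run and sleep time sits exactly in the middle. */
	return (runtime != 0 ? INTERACT_HALF : 0);
}
```

With equal run and sleep time the score sits at the midpoint, which is the seam at 45 degrees visible in the graphs below.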

Here is a graph of what we theoretically would like the score to produce before we switch the divisor around:

[graph omitted: theoretical score as a function of run time and sleep time]

And here is a graph generated by running the algorithm with a matrix of inputs:

[graph omitted: score computed by the algorithm over a matrix of run and sleep time inputs]

The second graph uses larger numbers, as we do in the kernel, to reduce rounding effects. You can see an irregularity at 45 degrees where we switch divisors as run time begins to exceed sleep time. In practice these scores are never computed: we define a threshold of 20, above which tasks are not considered interactive, so there is no point in computing the score when run time exceeds sleep time unless that threshold is moved.
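
For reference, a small harness like the one below, reusing the interact_score() sketch from earlier, walks a matrix of run and sleep inputs in similarly large units and prints the resulting surface. The grid spacing is arbitrary; the threshold of 20 is the one discussed above.

```c
#include <stdio.h>

#define INTERACT_THRESH  20    /* scores at or below this are interactive */

/* interact_score() from the earlier sketch. */
int interact_score(unsigned int runtime, unsigned int slptime);

int
main(void)
{
	unsigned int run, slp;
	int score;

	/*
	 * Sweep a matrix of inputs in large units, as the kernel does,
	 * so integer rounding barely matters.
	 */
	for (slp = 0; slp <= 100000; slp += 10000) {
		for (run = 0; run <= 100000; run += 10000) {
			/*
			 * The kernel short-circuits the run >= sleep half
			 * of this matrix: once a thread has accumulated any
			 * run time, those scores are at least 50 and can
			 * never pass a threshold of 20.  They are computed
			 * here only to show the full surface.
			 */
			score = interact_score(run, slp);
			printf("%6u %6u %3d%s\n", run, slp, score,
			    score <= INTERACT_THRESH ? "  interactive" : "");
		}
	}
	return (0);
}
```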

Going from left to right, run time increases; from background to foreground, sleep time increases. A thread traces a path forward and to the right depending on its behavior. When run time and sleep time increase equally, the score quickly reaches an equilibrium well above the threshold for interactive scheduling. A thread looking to abuse the system couldn't use much more than 20% of the cpu in a steady state, and this can be tightened further by reducing the interactive threshold. On a busy system this 20% dwindles depending on load, ultimately providing no advantage to a would-be exploiter. A thread running right out of the gate raises its score super-linearly to 50 within milliseconds, while a recently awoken thread climbs linearly as it accumulates cpu time.

The algorithm requires a lot of sleep time to be accumulated before a thread can be considered interactive. This remembered sleep time is capped at a few seconds, so it only takes a few hundred milliseconds to discover that a thread is no longer interactive. It does permit interactive UI applications to wake up with the lowest possible latency, since they have a very high priority. If they then abuse this benefit for very long, they are scheduled round-robin based on cpu utilization like other bulk tasks. In practice we have picked values that keep desktop user applications interactive as well as possible.