One side-effect of splitting a program into multiple processes is that instructions no longer have an inherent order. One of the most evident places to see the consequences is in a videogame. I am sure most gamers have played a game where the controls just felt sluggish and muddy for some inexplicable reason. While there could be several causes, one likely culprit is that your input is not evaluated for a perceivably long time. Chris Blizzard of Mozilla took on this and other issues with multithreaded applications, framing them around Firefox past, present, and future.
Firefox is getting Beta all the time.
One common misconception is that your input is recognized between each frame; in truth, many frames can go by before input affects the events on screen. In a recent E3 interview, John Carmack discussed id Software measuring up to 100ms worth of frames rendered before one finally reflected the user’s command. This is more forgivable in slower-paced games where agility matters less: if your character would lose a foot race to a yak, turns about as quickly as one, and takes a hundred bullets to die, you will not notice that you started to dodge a few milliseconds earlier, since you would expect to die in either case. In a web browser the effect is less dramatic, but the same principle holds: the browser is busy with its many tasks and cannot waste too much time checking whether the user has requested something. This aspect of performance, along with random hanging, falls under “responsiveness”. Mozilla targets 50 milliseconds (one-twentieth of a second) as the maximum time before Firefox rechecks its state for changes.
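To make that 50 millisecond target concrete, here is a minimal sketch of an event loop that works against a responsiveness budget. This is not Firefox’s actual loop; poll_input(), handle_input(), and next_work_chunk() are hypothetical stand-ins for a real event queue and task queue.

/* Minimal sketch of a ~50 ms responsiveness budget (not Firefox's
 * actual event loop). Compile with: cc -O2 loop.c */
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

#define BUDGET_MS    50
#define TOTAL_CHUNKS 200

static int chunks_left = TOTAL_CHUNKS;

/* Stubs: a real browser would read the OS event queue here. */
static bool poll_input(void)   { return false; }
static void handle_input(void) { }

/* Simulate one small unit of background work (~5 ms). */
static bool next_work_chunk(void) {
    if (chunks_left == 0)
        return false;
    struct timespec t = { 0, 5 * 1000000L };
    nanosleep(&t, NULL);
    chunks_left--;
    return true;
}

static long elapsed_ms(struct timespec start) {
    struct timespec now;
    clock_gettime(CLOCK_MONOTONIC, &now);
    return (now.tv_sec - start.tv_sec) * 1000
         + (now.tv_nsec - start.tv_nsec) / 1000000L;
}

int main(void) {
    while (chunks_left > 0) {
        if (poll_input())
            handle_input();   /* react to the user before anything else */

        struct timespec start;
        clock_gettime(CLOCK_MONOTONIC, &start);

        /* Work until the 50 ms budget is spent, then loop back and
         * recheck for input before doing anything else. */
        while (elapsed_ms(start) < BUDGET_MS && next_work_chunk())
            ;
        printf("budget spent; rechecking input (%d chunks left)\n",
               chunks_left);
    }
    return 0;
}

Real browsers interleave work far more finely than this, but the budget idea is the same: never let background work starve the input check for longer than the target.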
Chris Blizzard goes on to discuss how hardware is now advancing mostly through increased parallelism rather than clock speed or other per-thread improvements. GPGPU was not a topic in the blog post, leaving the distant-future question centered on what a multithreaded DOM would look like – favoring classical multicore over the still-budding many-core architectures. Memory usage and crashing were also addressed, though likely more to dispel Firefox’s memory-hog stereotype, which took hold late in the Firefox 2 era.
The GPGPU trail is not Mozilla’s roadmap.
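As a toy illustration of that shift, the sketch below splits a job across one POSIX thread per core; process_slice and NITEMS are made-up names for this example, not anything from the talk. On parallelism-first hardware, this kind of decomposition, rather than waiting for a faster core, is what speeds the job up.

/* Illustrative only: spread a job across one thread per core.
 * Compile with: cc -O2 -pthread parallel.c */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NITEMS      1000000
#define MAX_THREADS 64

static double data[NITEMS];

struct slice { int begin, end; double sum; };

static void *process_slice(void *arg) {
    struct slice *s = arg;
    for (int i = s->begin; i < s->end; i++)
        s->sum += data[i] * data[i];   /* stand-in per-item work */
    return NULL;
}

int main(void) {
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);  /* one thread per core */
    if (ncores < 1) ncores = 1;
    if (ncores > MAX_THREADS) ncores = MAX_THREADS;

    for (int i = 0; i < NITEMS; i++)
        data[i] = i;

    pthread_t tids[MAX_THREADS];
    struct slice slices[MAX_THREADS];
    for (long t = 0; t < ncores; t++) {
        slices[t].begin = (int)(t * NITEMS / ncores);
        slices[t].end   = (int)((t + 1) * NITEMS / ncores);
        slices[t].sum   = 0.0;
        pthread_create(&tids[t], NULL, process_slice, &slices[t]);
    }

    double total = 0.0;
    for (long t = 0; t < ncores; t++) {
        pthread_join(tids[t], NULL);   /* wait, then combine results */
        total += slices[t].sum;
    }
    printf("%ld threads, total = %g\n", ncores, total);
    return 0;
}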
The last topic discussed was sandboxing for security. One advantage of splitting your threads into discrete processes is that you can ask the operating system to assign limited rights to each process. The point of limited rights is to prevent an application from wielding more permissions than it needs and forcing your computer to do something undesirable. If you are accepting external data, such as a random website on the internet, you need to ensure that anything able to exploit a vulnerability in your web browser gains as little permission as possible. While there is no guarantee that external data will never be executed with dangerous permission levels, the harder you make it, the better.
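As a hedged sketch of what OS-assigned limited rights can look like on Linux – this is not Firefox’s actual sandbox, and run_untrusted_work is a hypothetical placeholder – the program below forks a worker process and sheds privileges before it touches untrusted data. PR_SET_NO_NEW_PRIVS irrevocably prevents the child, and anything it executes, from ever gaining new privileges (for example via setuid binaries), and the RLIMIT_NPROC cap stops it from spawning further processes.

/* Sketch of a privilege-reduced worker process on Linux.
 * Compile with: cc -O2 sandbox.c */
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical placeholder for parsing untrusted web content. */
static void run_untrusted_work(void) {
    printf("worker %d: handling untrusted data with reduced rights\n",
           (int)getpid());
}

int main(void) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {            /* child: the sandboxed worker */
        /* Irrevocably refuse any future privilege escalation. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) != 0) {
            perror("prctl");
            _exit(1);
        }
        /* Forbid the worker from creating more processes
         * (effective for unprivileged users). */
        struct rlimit none = { 0, 0 };
        setrlimit(RLIMIT_NPROC, &none);

        run_untrusted_work();
        _exit(0);
    }

    waitpid(pid, NULL, 0);     /* parent keeps its full rights */
    return 0;
}

Even if the worker is compromised by malicious data, the operating system, not the worker, decides what it may do; real browser sandboxes layer on seccomp filters, namespaces, and similar mechanisms.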