If you had a reason to go looking, you may already know that Mozilla has been compiling Firefox Nightly as a 64-bit application for Windows over the last several months. It is not a build designed for the general public; in fact, I believe it exists basically only to make sure that nothing was horribly broken by some arbitrary commit. That might change relatively soon, though.
According to Mozilla's "internal", albeit completely public, wiki, the non-profit organization is currently planning to release an official 64-bit version of Firefox 37. Of course, all targets in Firefox are flexible and, ultimately, it is only done when it is done. If everything goes to schedule, though, that should be March 31st.
The main advantage is for high-performance applications (although there are some arguments for security, too). One example: open numerous tabs to drive Firefox's memory usage up, then attempt to load a Web application like BananaBread. Last I tried, it simply would not load (unless you cleaned up memory usage somehow, such as by restarting the browser). It runs out of memory and just gives up. You can see how this would be difficult for higher-end games, video editing utilities, and so forth. This will not be the case when 64-bit comes around.
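To make the "runs out of memory and gives up" failure mode concrete, here is a rough sketch (not how Firefox itself behaves) that caps a child Python process at 4 GiB of address space, roughly the best case for a 32-bit Windows build, and then attempts a single oversized allocation. This is Unix-only, since it relies on the `resource` module, and the 5 GiB figure is just an illustrative stand-in for a large game or video workload.

```python
import subprocess
import sys

# Child script: cap our own address space at 4 GiB, then try to
# allocate 5 GiB in one go, like a huge in-memory asset would.
child = """
import resource
LIMIT = 4 * 2**30  # 4 GiB, roughly a 32-bit process's best-case ceiling
resource.setrlimit(resource.RLIMIT_AS, (LIMIT, LIMIT))
try:
    buf = bytearray(5 * 2**30)
    print("allocated")
except MemoryError:
    print("MemoryError: hit the address-space ceiling")
"""

# Run the child in its own process so the cap does not affect us.
result = subprocess.run([sys.executable, "-c", child],
                        capture_output=True, text=True)
print(result.stdout.strip())
```

A 64-bit process has no such 4 GiB ceiling, so the same allocation would simply succeed (given enough RAM plus swap).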
If you are looking to develop a web app, be sure to check out the 64-bit Firefox Nightly builds. Unless plans change, it looks like you will have even more customers soon. This is unless, of course, you are targeting Mac OS X or Linux, which already have 64-bit binaries available. Also, why are you targeting specific operating systems with a website?
Even though, as a FF user, I’m happy to hear this news, I can’t pass it by without mentioning that it is 2014 and x86-64 came to market in 2003. Mozilla sure took its sweet time updating its obsolete software. But I guess at least it is free, so I should not complain too much.
@ZoA: It’s not that they’ve taken their sweet time to get 64-bit support. They’ve supported 64-bit on Linux and Mac OS X since 2011. They’ve been fighting with issues on Windows for a while, and no one has been able to put forth the time/effort to fix these issues properly yet. You can find some background on the issue at http://www.neowin.net/news/so-where-is-the-64-bit-version-of-firefox-mozilla-gives-us-an-update
FWIW, Chrome just released a 64-bit version (v37) in August. Internet Explorer has shipped in both 32-bit and 64-bit since 2011 (v9). I switched over to IE11 and actually prefer it to Chrome and FF. Each one does some things well, but so far the speed of IE11 is impressive.
First, I disagree that it's obsolete software. Second, they discussed numerous reasons in the linked Mozilla Wiki page. Basically, it mostly comes down to plugins. Google took the hit for Mozilla in the switch to 64-bit.
"Chrome is doing us a huge favor by setting NPAPI expectations in the market. We can learn from their rollout."
"Assumption: there isn't a huge 'first-mover' advantage with 64 bit. While it offers huge tech. advantages, it's not a consumer feature for most users. It's better to be right than first."
Are you sure that 32-bit FF actually runs out of memory, Windows having virtual memory and all? Maybe it just takes up all the RAM allotted to it, and the rest is paged out to virtual memory; the more load beyond available RAM, the more paging, until the system begins thrashing (too much page swapping and not enough actual processing) as it becomes flooded with page faults. This is not hard to do on a system without enough RAM and with a slow hard drive. Maybe the 64-bit version of FF is better at suspending its code threads, keeping the ratio of its available RAM to code/data size in memory under control, and keeping memory leaks and other problems to a minimum. Then again, Windows could use a little work in this area, too.
Virtual memory is included in the limit. It doesn’t matter whether the data is paged to disk or not. On 64-bit Windows, a 32-bit app is limited to 2GB unless it is compiled with the LARGEADDRESSAWARE flag, in which case it can address up to 4GB. Firefox has the flag, so it is limited to 4GB total.
http://msdn.microsoft.com/en-us/library/aa366778.aspx
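Those two ceilings are easy to sanity-check in bytes; a quick arithmetic sketch (the figures come from the comment above and the linked MSDN page, not from anything Firefox-specific):

```python
# Address-space ceilings for a 32-bit process on 64-bit Windows.
GIB = 2**30
default_limit = 2 * GIB   # without the LARGEADDRESSAWARE flag
laa_limit = 4 * GIB       # with LARGEADDRESSAWARE (Firefox's case)

print(default_limit)  # 2147483648
print(laa_limit)      # 4294967296
```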
I’m not talking about near or far calls, or the amount of code or data address space that can be accessed by a single block of contiguous code in RAM (on 32- or 64-bit CPUs)! The programs and data in one application can be made up of multiple code blocks and data blocks (data is usually the culprit in thrashing), and together they are able to be larger than even 4GB. Applications can and do include .exe, .obj, .DLL, and other file types (data is also pulled into memory). We are talking about virtual memory and paging, not the maximum code/data block size, or the immediate assembly-language opcodes that contain an address offset embedded in the 32/64-bit instruction (near or far addressing limits). Virtual memory can be as big as the available hard disk/SSD storage. Remember, virtual memory and its system functionality are hidden by the CPU’s hardware from non-OS code; the CPU and OS implement the virtual-memory subsystem via privileged instructions that only the hardware and OS see. The paging files are managed by the hardware and OS and can be quite large, and a single application can call upon APIs and whole system assemblies (*.DLL, .exe, etc.) that take up more memory than the 2GB or 4GB single-contiguous-block limit you are referring to.
I can easily fire up Blender 3D (and no other applications), pull in a 2 or 3 million polygon mesh, and perform an entire mesh edit, such as Mesh Relax, and cause my Windows 7 based laptop with 8 gigs of memory to thrash. The entire system can become so locked up handling page faults that it cannot even be accessed, forcing me to try to bring up Task Manager (it sometimes takes about 20 minutes), end the Blender 3D task, and wait another 5 or 10 minutes for the task to be flushed and the system to return to normal.
Large code and data blocks, in multiple 2GB, 4GB, or larger chunks on server SKUs, can and do add up to more than the available RAM; this is why virtual-memory-aware CPUs and SOCs have been utilizing a paging file for many years now. Most systems, Windows and other OSs alike, have paging files! Hell, if Windows did not have paging files, you would need lots more than 4 gigs to run it; those 4 gigs are only being used for the code/data pages that are actually needed by the 100+ services/processes, plus your applications, that you see running on the average PC/laptop running Windows. Windows will automatically set the paging file size; it is usually 1.5 times the RAM size. Any graphics software that utilizes large mesh models, or scenes with lots of meshes, can easily tax 32GB, let alone 8 or 16. Virtual memory has been around since I was in diapers, and I reeked of Clearasil when microprocessors first started implementing it.
Take a few operating-systems classes and advanced assembly-language classes. You may not need to program in assembly language anymore, except for the most essential OS functions that require hand optimizing, but there is no better way to learn about CPUs than (advanced) assembly language. And for any programming, flowcharting is best done before a single line of code is hammered into any IDE, especially if you are building an API; flowcharting the logic in pseudocode with a standard flowcharting template would prevent a lot of the bugs you see in software systems today.
If you need a 64-bit Windows version of Firefox now, use Waterfox.
waterfoxproject.org
Yes, but what about AMD-based systems, given that the WF browser is built with the Intel C++ Compiler (which Wikipedia says Waterfox is compiled with)?
It may not be so fast on AMD. Is this true?
P.S. What the hell is Wikipedia doing with all those donations? Hell, Wikipedia’s CPU tables do not list any CPU’s/SOC’s address bus width, let alone the complete pinouts. What good is a table of CPUs/microprocessors without address bus widths! What’s that, Wikipedia, the money’s for the big wigs’ big salaries and keg parties? The COMPLETE pinouts of ALL CPUs/SOCs, please.
2^32 = 4,294,967,296 addresses can be addressed by a 32-bit address bus.
2^48 = 281,474,976,710,656 addresses can be addressed by a 48-bit address bus (used on current 64-bit CPUs; none are actually using that much RAM).
2^64 = 18,446,744,073,709,551,616 addresses can be addressed by a 64-bit address bus.
The CPU may be 32-bit (data bus and standard register size) or 64-bit (data bus and standard register size), but that has nothing to do with the address bus width, and the address bus is what determines how much RAM can be directly addressed by a CPU. (Note: SIMD and special-function registers do exist that are wider, but a CPU’s bit-ness is usually measured by the data bus and standard register width, the machine word size, and has very little to do with direct addressing of RAM.) There were some exceptions back in the days of 8- and 16-bit processors, but generally, on current systems, these rules apply.
This does not include a description of the virtual memory hardware and page tables, which can trick the application into thinking (so to speak) that a computing platform actually has that amount of physical RAM, when it could have only 2GB, 4GB, 8GB, etc. of actual RAM.
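The powers of two quoted above are easy to verify; a quick arithmetic check (the 256 TiB conversion is just the 48-bit figure restated in more human units):

```python
# Verify the addressable-space figures from the comment above.
addresses_32 = 2**32   # 32-bit address bus
addresses_48 = 2**48   # 48-bit address bus (current 64-bit CPUs)
addresses_64 = 2**64   # full 64-bit address bus

# In human units: 2^48 bytes of address space is 256 TiB.
tib_for_48_bits = addresses_48 // 2**40

print(addresses_32)        # 4294967296
print(addresses_48)        # 281474976710656
print(addresses_64)        # 18446744073709551616
print(tib_for_48_bits)     # 256
```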
I’ve built 64-bit Firefox for a while now, just like Mozilla would release it: https://github.com/Jan02/firefox-x64/releases