new machine...new cpus.... Saturday, 05 March 2005  
There's a lot happening in the hardware world - Intel have announced a slew of new CPUs: multi-core parts, alongside the new EM64T 64-bit, AMD-compatible enhancements.

The trouble is, which machine and CPU to aim for? My current machines are not top of the line, but they are more than sufficient. Having a dual-core chip is certainly desirable, but not if it generates too much heat and fan noise.

64-bit is definitely desirable. I will never understand the timing: the 80286 was a great CPU compared to the 8086, and the 80386 followed close behind with virtual memory - two innovative steps in quick succession. Yet it has taken 20 years to add 64-bit enhancements.

Intel are now talking about virtualisation technology in future CPUs. This was nearly there back in 1985, but a few key features were missed out, which stopped full virtualisation being done. The 80386 could support virtual 8086 DOS boxes, and Windows and other "extenders" took advantage of that.

In recent years, it took VMware to bring the importance of this technology to a mass audience - a brilliant product, which everyone else has fallen over themselves trying to catch up with.

Now we have Xen (from the Cambridge Computer Laboratory) and a raft of open-source CPU emulators letting you run multiple operating systems on one machine.

But virtualisation could be a pig. If the open-source community or a company like VMware owns the inner-ring technology, then you - the customer - can do what you want with your PC.

But suppose Microsoft decides virtualisation is key to some future feature in Windows - a sandboxed Internet Explorer, say, or DRM. Then VMware or Xen wouldn't get a look in; they would need to create a hyper-hypervisor to allow Linux and Windows to run at the same time.

An OS vendor should have nothing to do with hypervisors, as it spells lock-in for the customer. A third party should have control of this, and I fear the worst.

Hopefully AMD will track Intel, and in a year or two we will all be buying dual-core 64-bit chips with hypervisor technology that doesn't use too much energy...


Posted at 09:55:40 by Paul Fox | Permalink
  crisp and the undo file Saturday, 05 March 2005  
It was recently brought to my attention that CRiSP wasn't handling large files very well (large meaning >1GB, but also >4GB or even 16GB).

CRiSP works well for reading these files, but as soon as you do a "Save As" operation, it goes downhill from there. A patient user had let it run for more than 24 hours trying to save a 16GB file...when it finally ran out of memory.

This was under Windows XP. Of course, Linux is superior here :-) It ran out of memory in minutes instead of days :-)

The problem was down to exponential behaviour in the way the files were saved and buffered, and in the undo information recorded for them. Having looked closely at the situation, a few small tweaks made the performance much more linear.
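
To illustrate the shape of the problem (a minimal sketch in C, not CRiSP's actual code), compare a buffer that is reallocated to the exact size needed for every block written - so each append re-copies everything written so far - with one that grows its capacity geometrically. The append_exact and append_geometric helpers below are hypothetical, invented purely for this example:

    /*
     * Sketch only - not CRiSP's code. Appending BLOCK_SIZE bytes at a time:
     * resizing to the exact size copies the whole buffer on every append
     * (quadratic in the file size); doubling the capacity keeps the total
     * copying amortised linear.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define BLOCK_SIZE 4096

    /* Quadratic: every append may memcpy everything written so far. */
    static char *append_exact(char *buf, size_t *len,
                              const char *blk, size_t n)
    {
        char *p = realloc(buf, *len + n);
        if (p == NULL) { perror("realloc"); exit(1); }
        memcpy(p + *len, blk, n);
        *len += n;
        return p;
    }

    /* Amortised linear: double the capacity whenever it runs out. */
    static char *append_geometric(char *buf, size_t *len, size_t *cap,
                                  const char *blk, size_t n)
    {
        if (*len + n > *cap) {
            size_t newcap = *cap ? *cap : BLOCK_SIZE;
            while (*len + n > newcap)
                newcap *= 2;
            char *p = realloc(buf, newcap);
            if (p == NULL) { perror("realloc"); exit(1); }
            buf = p;
            *cap = newcap;
        }
        memcpy(buf + *len, blk, n);
        *len += n;
        return buf;
    }

For a 16GB file written in 4KB blocks, the exact-size version ends up copying tens of petabytes in total, while the doubling version copies only a few times the file size. Undo records accumulated per edit are prone to the same pattern if each new record re-copies everything that came before it.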

Looking further, there were some pathological conditions where it would try to store too much in memory, causing heavy swapping or outright process termination. (Linux's out-of-memory killer is a severely misdesigned non-feature.)

After further work, the situation looks to be under control - editing, saving, and resaving large files works as expected. It's still slow, but that's purely down to the amount of disk I/O, which is a function of the file size and the memory on your system.

Next, I am hoping to experiment with some additional features to improve performance and memory/disk usage.


Posted at 09:45:29 by Paul Fox | Permalink