Being stupid. Utterly. | Saturday, 30 June 2012 |
Monitoring prices on amazon.co.uk is interesting - prices go up and down and resellers will often sell a "used" or "refurbished" item for more than the "new" price. Very strange.
Out of boredom, I have been watching amazon.de, amazon.fr, and amazon.it.
Interesting watching the different prices and the different used/reseller markets.
Bang! Spotted a bargain on amazon.it. A brand *new* 27" iMac at better than my target price point. Placed the order. Feeling happy with myself.
(I have tried to place orders with resellers on Amazon, and so far, for iMacs, none have been accepted; either these are scam artists, or Amazon is allowing multiple orders to be placed when the seller only has a single unit.)
So, I am feeling really proud of myself. (My Italian is very poor - but enough to know what buttons to press!)
Then it dawned on me.
What *exactly* had I purchased? It sure wasn't a 27" iMac. It was the 21" model. Annoyingly, the layout of items on the different Amazon stores differs, and even the large/small pictures of iMacs are used inconsistently. Every Amazon site except amazon.it lists the screen size in the description. Not amazon.it - there you have to pore over the technical description to spot the telltale 1920x1080 screen resolution.
Oops! But I had just placed the order! Panic!
Luckily - and I love Amazon for this - you can cancel an order within about 30 minutes of placement. So, I did this. Being *very* careful to try to understand what the on-screen phrases said (my Italian is not good enough to handle the subtle language in this area).
Fortunately, translate.google.com was my friend. Very helpful to paste key sentences and phrases into the translator and find I had hit the right buttons.
So, there you have it. A total buffoon. Io sono stupido! (I am stupid!)
The Heat is on | Saturday, 30 June 2012 |
What do you use to monitor a system? Most people probably turn to "ps" or "top". Most people (who use the tools) understand most of the data displayed, but probably not all.
I have created "proc" - a top-like tool to show lots of graphs and key data from a Linux system. DTrace can expose tons more data (if you know where to probe).
But, paradoxically, the more data you can see, the more rarely you actually use the tools. ("proc" provides views of data from the /proc filesystem - this filesystem contains huge amounts of interesting data).
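As an aside, the raw numbers are easy to get at yourself. A minimal sketch (assuming a Linux /proc) of reading the aggregate CPU counters that "top" cooks its percentages from:

    /* Sketch: pull the aggregate cpu counters from /proc/stat -
     * the same raw numbers "top" derives its percentages from. */
    #include <stdio.h>

    int main(void)
    {
            long user, nice, sys, idle;
            FILE *fp = fopen("/proc/stat", "r");

            if (fp == NULL ||
                fscanf(fp, "cpu %ld %ld %ld %ld",
                    &user, &nice, &sys, &idle) != 4) {
                    perror("/proc/stat");
                    return 1;
            }
            fclose(fp);
            printf("user=%ld nice=%ld sys=%ld idle=%ld (jiffies)\n",
                user, nice, sys, idle);
            return 0;
    }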
Q: What is the one "true" data point?
A: Heat (temperature)
When I look at my iMac (I use the excellent iStat tool, which puts little temperature graphs on the menu bar) - it's the temperature which tells me everything. High temperature means the system is busy.
Strangely, although Linux has a lot of measurements, there is nothing I have found which corresponds to the CPU temperature. I have used lm_sensors and psensor and a bunch of other tools, but no CPU temperature (there probably is one, but I haven't found it, and it's not easy to find).
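For what it's worth, on kernels and hardware that expose an ACPI thermal zone, a temperature can be read straight out of sysfs - a minimal sketch, assuming /sys/class/thermal/thermal_zone0 exists on your box:

    /* Sketch: read the first ACPI thermal zone, if the kernel
     * exposes one (this path is hardware and kernel dependent). */
    #include <stdio.h>

    int main(void)
    {
            FILE *fp = fopen("/sys/class/thermal/thermal_zone0/temp", "r");
            int millideg;

            if (fp == NULL || fscanf(fp, "%d", &millideg) != 1) {
                    fprintf(stderr, "no thermal zone exposed here\n");
                    return 1;
            }
            fclose(fp);
            printf("CPU temperature: %d.%03d C\n",
                millideg / 1000, millideg % 1000);
            return 0;
    }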
Heat == Power. Power == $$$. So, more heat means more $$$ (watts) being expended. And that is a very good aggregate measure of what your system is doing. (All the other stats simply provide fine-grained data on subsystems, whether CPU, graphics, HD, or other motherboard sensors.)
Looking at the 27" iMacs - they are rated at ~360W of power. That is a lot. That includes the screen, GPU and CPU. Most of the time, many Macs are going to be idle. (My iMac hits 90+C during heavy-duty operations, such as media encoding.) I hear lots of reports that running "hot" is normal for iMacs. In fact, the Macs (and my laptop's i7 CPU) are rated for up to 100C operation; at 110C, they will shut themselves down.
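A rough worked example, assuming a hypothetical tariff of 15p/kWh: 360W flat out for 24 hours is 0.36kW x 24h = 8.64kWh, or about 1.30 GBP a day - getting on for 470 GBP a year if it never idled.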
That's a *lot* of heat.
Strangely, one cannot tune a system based on heat. When I use my laptop and it's busy compiling or number crunching, it gets hot. The fan speeds up, and it gets noisy. I may pause the operation - I hate to think what my laptop would be like if I allowed it to max out for very long periods of time.
Wouldn't it be nice if you could get a watts or $$$ figure out of "ps"?
DTrace for RaspberryPi - first problem | Saturday, 23 June 2012 |
First problem: the stock Raspberry Pi kernel is built without /proc/kallsyms, which DTrace relies on for kernel symbols. Perfectly understandable, as /proc/kallsyms uses a sizable chunk of memory.
I need to find a workable workaround. Maybe later kernels (wheezy?) will enable it, or I may have to go and create a custom kernel.
OK, here's the link for building your own kernels - which packages you need, and Ubuntu cross-compilation:
http://elinux.org/RPi_Kernel_Compilation
Hard Projects | Saturday, 23 June 2012 |
Here are the options.
UTF-8 in CRiSP. CRiSP supports UTF-8, but strangely it's not as natural as I would like/expect. Things are complicated because multiple layers need to be supported (cursor handling treating UTF-8 sequences as single characters, display, display in character mode, X11, Windows and MacOS). I need to work out how it currently works, or does not work, and go and fix it.
DTrace. Yes, it's time to get back on this hobby horse. There are two immediate avenues of research: the PID provider - figure out what is breaking in user space - and ARM support. Now I have a RaspberryPi, there are a few challenges ahead.
ARM Challenges
I'll briefly summarise what needs to be done to the code for ARM. Initially the goal is to just target the RPI - I don't have enough ARM devices to toy with, so this sets a baseline.
Firstly, we need an instruction decoder for ARM. The one for the Intel instruction set, mostly courtesy of Sun, is obviously useless for ARM. If I am lucky, the instruction decoder is simple for ARM, since classic ARM instructions are a fixed 32 bits wide (Thumb mode, with its 16-bit encodings, is another matter).
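To give a flavour of the job, here is a first-pass classifier for the classic 32-bit encoding - a sketch only, nowhere near a full decoder, with the field layouts taken from the ARM architecture manual:

    /* First-pass classifier for classic 32-bit ARM instructions. */
    #include <stdio.h>
    #include <stdint.h>

    static const char *arm_class(uint32_t instr)
    {
            uint32_t cond = instr >> 28;        /* bits 31..28: condition */
            uint32_t op   = (instr >> 25) & 7;  /* bits 27..25: class     */

            if (cond == 0xF)
                    return "unconditional/extension space";
            switch (op) {
            case 0: case 1: return "data processing";
            case 2: case 3: return "load/store";
            case 4:         return "load/store multiple";
            case 5:         return "branch";
            case 6:         return "coprocessor load/store";
            default:        return "coprocessor or swi";
            }
    }

    int main(void)
    {
            /* 0xE0810002 encodes "add r0, r1, r2" */
            printf("%s\n", arm_class(0xE0810002));
            return 0;
    }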
Next, much of the code assumes we are i386 or amd64, and that's no longer true; so even compiling for ARM is going to require various cleanups and tweaks.
Lastly, building on the RPI itself is going to require kernel builds. At the least I need the kernel sources, but it may well be that I need to cross-compile - the 256MB of RAM may be too little for dtrace to compile on. I hope not.
The other stumbling issue is that 256MB of RAM is very puny. I think the smallest VM I have tried is 384MB of RAM. Although dtrace isn't very big, it can use quite a chunk of memory for per-CPU data structures, and this could leave too little for the rest of the system to work. So, I may need to turn off the instr provider and try to be very frugal in memory requirements.
A Bad API: XtAppAddTimeOut | Sunday, 17 June 2012 |
This API is used in CRiSP, and has been for around 20 years. Recently, I encountered a strange bug which was annoying me. The flashing cursor would periodically stop. At first I thought it was a regression in some aspect of performance, or an issue with one of the newer features, but it wasn't. It *was* being tickled by the new features, but they were not directly responsible.
Let's consider malloc() and friends. People who use malloc() (or new[] for the C++ folks) know that you can free memory and two types of problem present themselves: (a) using memory after it is freed, and (b) forgetting to free a memory block, leading to a memory leak.
Now, the X11 timer API is similar to malloc. If you fire a timer, it has a finite life, and if you end up with multiple timers for the same callback, they will all fire. This can cause issues, such as "frantic" cursor flashing, or whatever symptom the callback's code produces. It's typically easy to detect this scenario, and callbacks will often have preventative measures to avoid the core dumps it could cause.
Now, XtRemoveTimeOut is particularly nasty. The current X Window implementations tend to reuse an internal timer structure. You can do this:
XtRemoveTimeOut(id1); ... XtRemoveTimeOut(id1);
and although the second XtRemoveTimeOut is redundant, it can have a strange side effect. If the code between the two calls invokes XtAppAddTimeOut, then the memory freed for the original timer is reused by another timer. The second call to XtRemoveTimeOut then removes the timer for the "other code". We may have taken away someone else's timer.
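Spelled out (cursor_cb and other_cb are made-up names for illustration):

    /* The hazard, step by step. */
    XtIntervalId id1 = XtAppAddTimeOut(app, 500, cursor_cb, NULL);
    XtRemoveTimeOut(id1);         /* timer record freed              */
    XtIntervalId id2 = XtAppAddTimeOut(app, 500, other_cb, NULL);
                                  /* may reuse id1's internal record */
    XtRemoveTimeOut(id1);         /* stale id - silently kills id2!  */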
This is what was (erratically) happening in CRiSP. Multiple calls to remove the same timeout were not protected against, and this led to one piece of code removing the timeout belonging to another.
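The defensive fix is the usual one for handle-style APIs - zero the id at the point of removal, so a stray second remove becomes a no-op:

    if (timer_id) {
            XtRemoveTimeOut(timer_id);
            timer_id = 0;    /* a later, duplicate remove is now harmless */
    }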
CRiSP doesn't use many timers, but one is tied to cursor flashing, and if that gets removed, it will never fire again. So, the cursor stopped flashing. (It would flash if you typed, as the screen display code needs to hide/unhide the cursor, but it wasn't obvious this was happening.)
I ended up debugging this by adding an LD_PRELOAD trace library to observe the "double-free" scenario, and eventually found a style of coding that could lead to this (fixed in v11.0.7).
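For the curious, the shape of such a trace library - this is a reconstruction rather than my actual code, and a real version would also interpose XtAppAddTimeOut so that legitimately reused ids do not trip the check:

    /* LD_PRELOAD shim: warn when the same timer id is removed twice.
     * Build: gcc -shared -fPIC shim.c -o shim.so -ldl
     * Run:   LD_PRELOAD=./shim.so crisp ... */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <X11/Intrinsic.h>

    void XtRemoveTimeOut(XtIntervalId id)
    {
            static void (*real_remove)(XtIntervalId);
            static XtIntervalId removed[256];
            static int n;
            int i;

            if (!real_remove)
                    real_remove = (void (*)(XtIntervalId))
                        dlsym(RTLD_NEXT, "XtRemoveTimeOut");

            for (i = 0; i < n; i++)
                    if (removed[i] == id)
                            fprintf(stderr,
                                "XtRemoveTimeOut: id %#lx removed twice!\n",
                                (unsigned long)id);
            if (n < 256)
                    removed[n++] = id;

            real_remove(id);
    }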
Strangely, I had hit a similar problem on MacOSX, where CRiSP implements the X11 primitives as a layer on top of the Cocoa interface, but I hadn't noticed the same issue on X11, as it took a number of events, in the right order, to reproduce the scenario.
XtAppAddTimeOut()/XtRemoveTimeOut() need to either avoid reusing timer memory, or provide a debugging API to detect cancellation of already-freed timers.
2560x1440 $300USD | Saturday, 09 June 2012 |
Slashdot: where-are-all-the-high-resolution-desktop-displays
One of the best looking machines available today is the iMac. It's an all-in-one device, and as I write this, hopefully Apple will release new devices at next week's WWDC. A 27" screen with a built-in computer, or an overpriced computer with a built-in screen. Fewer cables and sockets to contend with.
Laptops have stagnated at the silly 1920x1080 resolution, so I read the above article totally agreeing with how the computer market has degraded into a Pop-Idol "me-too" kind of world. Laptop screens peaked at 1920x1200 and then went south, presumably due to the cost reduction from sharing panels with the LED/LCD TV market.
iMacs are expensive, as are all Apple products. Sure, there is an equivalent HP or DELL contender, with a whopping 2560x1440 screen, but they are largely overpriced. One can buy a DELL UltraSharp U2711 or Apple Cinema display, but at around 800 GBP, by the time you factor in a decent computer and mouse/keyboard, you are not far off the Apple price.
Now, on reading the Slashdot article, I was staggered/amazed. On that page are references to *tons* of 2560x1440 displays, presumably coming out of the same Asian factories as the genuine Apple/DELL displays, but with variations (low-cost connectors).
Approximately 200 GBP, or $300 USD. That's the cost of the largest display you can buy today (largest == large screen plus very high resolution; otherwise a 1080p 50" or 60" TV could be considered "largest").
That is shockingly cheap. And yet nobody, apart from Apple/HP/DELL, lets you know these displays are available. Even the component sellers and techno-gadget pages (Engadget, TheRegister) barely make any reference to them.
So, one could spec up an iMac-alike machine for close to half price (not as ergonomically desirable as the iMac, but at least you can select and change components more cheaply and more easily).
Here's an eBay search to show you what's available:
CRiSP On Raspberry Pi | Thursday, 07 June 2012 |
It's also a shame (or good?) that RPI now has some competitors - USB-flash-drive-sized self-contained computers. For now, these are vaporware, and, until recently, RPI was vaporware too.
I quickly got it up and running - one advantage of not getting an RPI on the day of release is that the web is now chock full of tips for getting an RPI working.
Now I have the device, I am getting a 64GB SD card - the 4GB one I am using lets me get off the ground, but project #1 is to connect the RPI to active speakers and use it as a music device.
Alas, my cheapo USB wifi dongle is proving flaky, so I am going to get another one and hope that works. (Presently, the existing dongle loses the connection after a few minutes, and a reboot, or a pull-out/plug-in, is required to restore sanity.)
I am trying to get CRiSP built/installed on the device - the first new "CPU" port of CRiSP for quite a few years (the last was for Itanium). This is proving pretty straightforward, but the wifi is making life a pain (i.e. I have to go and sit in front of the TV/screen).
I am planning to release CRiSP for RPI as a free product - no licensing, just to give "something back to the community". It will appear on the crisp download page (http://www.crisp.demon.co.uk/download.html) in a few days when the port is ready.
If I can clear the backlog of bugs and issues, then I may take a poke at dtrace for RPI. In theory this should be straightforward, but it will be painful to compile a kernel on the fairly feeble 700MHz CPU, so I may have to look at cross-compiling. The 256MB memory of an RPI may prove a limitation too.
No dates on dtrace - we will just have to "see".
I have some other potential "projects" to do on the RPI. First is the music server; second is a video server - ideally on the same physical HW as the music server, but I don't think this is doable with only one audio out (although it might be if sound solely comes from the external speakers... need to experiment).
The other project is to replace the very aging G3 iMac which serves as the CRiSP FTP repository - a simple ftp/web server. This should be very untaxing - but ideally I want some cased RPIs before attempting this.
Being lied to. For 20+ years | Monday, 04 June 2012 |
At last, a version of Windows which didn't crash. Windows NT 3.1 was a 32-bit operating system. CRiSP had been compiled for Windows 3.1 - a 16-bit operating system. Porting and debugging CRiSP was painful - any bad behavior would likely require a machine reboot - Windows 3.x was too unstable; errant applications could write anywhere.
NT was protected from this nonsense. CRiSP exists as two main versions - a console version and the GUI application. (This is true today, not only for Windows, but for Unix/Linux and MacOS.)
What's the difference between a GUI application and a console application? "main(int argc, char **argv)".
Well, Windows has a different startup function - WinMain. WinMain takes the place of main(). It doesn't get an array of command line arguments; it gets a single string for the command line, and it's up to the application to parse it.
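For reference, the shape of the GUI entry point; the CommandLineToArgvW call is the stock Win32 way to get an argv back (error handling omitted, and this is an illustration rather than CRiSP's actual startup code):

    /* Sketch: recovering argc/argv from the single command-line
     * string a GUI application is handed. Link against shell32. */
    #include <windows.h>
    #include <shellapi.h>

    int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev,
        LPSTR lpCmdLine, int nCmdShow)
    {
            int argc;
            LPWSTR *argv = CommandLineToArgvW(GetCommandLineW(), &argc);

            /* ... parse argv[0..argc-1] ourselves ... */

            LocalFree(argv);
            return 0;
    }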
All the CRiSP tools (and all tools, even non-Foxtrot ones) parse that command line.
An annoying problem is that CRiSP relies on printf() for debugging and for some of the macro commands. When you link a Windows application, you use a different command line to signify it's a GUI application ("link -subsystem:windows,4.x" or equivalent, depending on the version of Windows you are targeting).
By contrast, a console application is very POSIX-like in its behavior. printf() writes to the console (cmd.exe) you invoked it from, and you can pipe the output.
I recently started using MINGW (http://www.mingw.org/) - a port of the GNU compiler collection to Windows. MINGW is different from CYGWIN, which provides a Unix/POSIX-like system under Windows. MINGW can generate native Windows applications - so, no need for the SDK. (MINGW is simply brilliant; I'll explain why below.)
In porting CRiSP to run under MINGW, I ended up building a "premake"-like build tool, because the Windows and Unix makefiles had grown too long in the tooth to easily adapt. After building CRiSP under MINGW, I did something *wrong*. I built crisp.exe as a console application. I didn't realise this. And was surprised that printf() was writing to the console.
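With MinGW the difference is a single link-time switch (file names illustrative): "gcc -o crisp.exe main.c" produces a console-subsystem binary (the default, where printf() reaches the invoking cmd.exe), while "gcc -mwindows -o crisp.exe main.c" selects the GUI subsystem, with no console attached.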
Up until now, CRiSP has had to emulate the console, and writes to a popup dialog. It's not a bad way of debugging, but a nuisance, despite some nice little features which help me.
But why was MINGW's crisp.exe writing to stdout quite happily, yet the Win32 version of CRiSP.EXE did not? I had attempted to solve this problem many years ago, and found that somewhere in the Windows startup code, the STDIN/STDOUT/STDERR handles are closed and not available - by the time WinMain() is called, it is game over.
But when crisp.exe is linked as a console application, this does not happen. stdin/stdout/stderr are left intact. So, a GUI application can read/write stdin/stdout!
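(For completeness - and this is a standard Win32 trick rather than anything from CRiSP - even a GUI-subsystem binary can usually claw a console back at runtime:)

    /* Sketch: reattach stdio in a GUI-subsystem application, assuming
     * the parent process (e.g. cmd.exe) has a console to attach to. */
    #include <windows.h>
    #include <stdio.h>

    void reattach_console(void)
    {
            if (AttachConsole(ATTACH_PARENT_PROCESS)) {
                    freopen("CONOUT$", "w", stdout);
                    freopen("CONOUT$", "w", stderr);
                    freopen("CONIN$",  "r", stdin);
            }
    }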
I don't know why all the Windows documentation makes a big play about the linking "subsystem", but if you ignore it, life is more palatable.
Why is MINGW so good? Because "gdb" just *works*. I can use gdb on Linux, MacOS and Windows and have the same debug environment. Even hardware watchpoints work. gdb may not be everyone's favorite debugger, but it is mighty powerful.
Prior to this I was using the free Visual C++ Express edition. (I had purchased the Professional Visual Studio a long time back, but Visual Studio, despite being a very powerful product, just changes too often with whatever flavour of technology is current, and it's not cost effective for software which runs cross-platform.) With the advent of Windows 8, it's not clear whether Microsoft is trying to create a walled garden, as Apple has done.
So, the GNU compiler collection is great - providing a consistent compiler platform across many operating systems. MINGW fills a gap - how to use GCC to create Windows applications.
Currently CRiSP is being built via Visual Studio, and MINGW is only being used for internal debugging, but this is likely to change in the near future.