dtrace4linux - support and issues | Tuesday, 28 February 2012 |
Can I ask people, when facing compile problems, to ensure they have all the prerequisites installed before reporting compilation issues.
It is my goal to have DTrace work on all releases of Linux - both forward and backward - but time and space don't permit validating every release (of Linux), especially older ones. I do attempt to look at Ubuntu and Fedora (usually after a prod from the community). I try to avoid downloading the latest Ubuntu releases until at least a week or two has passed, so that I can feel more comfortable I am not going to suffer suspend/resume or wifi or video glitches, as I have in the past.
One thing I have done brazenly and badly is keeping track of what I have installed on my system in terms of packages. When I first started DTrace, it was not a virgin Linux distro, but one polluted with my favorite development packages.
By the time the first DTrace port went out, I couldn't tell the difference between a virgin install and my own system. Over time, I have realised how important it is for newcomers who download and try out DTrace that it works "out of the box", and I have attempted to create scripts (get-deps.pl) to semi-automate updating your system with the required packages. But even across just Fedora and Ubuntu, in 32- and 64-bit variants, validating that the script works is nearly impossible.
I hope to do better in the future, but there can be no guarantee.
One of the commonest issues reported is missing header files, e.g. for 32-bit compiles. Even doing a package search using yum or apt-get (or whatever the package installer of choice is called) is a nuisance, as you get flooded with possibly matching libraries. Most of it makes sense to me, but it likely confuses newcomers to Linux, or people who are not programmers. Unfortunately, that is life on Linux.
(Maybe I should be adopting a standard RPM format so that the dependencies can be described properly; something for a different rainy day).
At the moment, for a short while, I am switching my focus back to CRiSP - adding features and enhancements; I find it good to switch back and forth between DTrace and CRiSP, as I sometimes lose focus on what I am trying to do. DTrace for Linux should be in a good state, and there are a lot of mini-projects to work on (I had started on the CPC provider, and there's more SDT probes to work through, along with refinements to the INSTR provider).
If people find DTrace useful, feel free to publicise it or drop me a mail, so that I know it is worthwhile.
Doing a dis-service to your fans | Monday, 20 February 2012 |
One Amazon feature I like, when I want something to occupy my mind, is the recommendations. Based on browsing or purchase history, it learns what you like and suggests related material. In general, this works really well, and occasionally pops into view music, videos or series I may be interested in.
One thing which I find very annoying - and it's not Amazon's fault - is the way the music industry takes loyal fans and treats them strangely.
I have listed with Amazon that I have and like Pink Floyd. I am totally surprised at how many, for example, "Dark Side of the Moon" albums there are: "Basic", "Advanced", "Intermediate Edition", "With Bells On", "Advanced Sound" (I made these up!). So my "recommendations" consist of five copies of each of their albums. I already have the albums - a few of them more than once.
But I don't know what to do! I could rate each variant as "I like/5-star", in which case it might just dig out more versions of the same things or other music I already have, and I end up with no useful recommendations.
I could tell Amazon "Not interested", but will that mean it thinks I dislike Pink Floyd, or will it stop showing me the variants across all types of music?
Oh well. The wonders of technology.
DTrace and the CPC provider | Tuesday, 14 February 2012 |
Performance measurement is a large topic - I can only cover it briefly here. Statistical sampling (similar to the classic Unix "prof" and "gprof") is great for weeding out hot spots in code. The first time you profile, it's easy to quickly find areas to optimise.
After a while, those tools run out of steam. With multithreaded applications and multicore CPUs, other factors quickly come into play, e.g. lock contention and cache misses.
The Intel and AMD chips provide quite sophisticated counters for measuring all sorts of things you may never have thought about. Unfortunately, not only do they differ between Intel and AMD, but the counters supported vary by chip family. (I don't even know if every new CPU supports a superset of the older counters.)
In user space, tools like "oprofile" and "perf" provide a way to gain access to these counters, and are great for deeper diving into hotspots. You may know that 90% of your time is spent in a matrix multiply, but not realise that 50% of that time is wasted in cache thrashing.
Linux has had a varied past - first resisting, then subsequently adopting, profiling subsystems - and although it should be easy, it isn't. The differences between CPU families, and the complexity of the underlying hardware, mean that providing a chip-independent API is difficult.
In recent years, AMD and Intel have provided new monitoring facilities which aim to allow instruction-accurate samples of performance. (Prior facilities relied on counters and interrupts which couldn't pinpoint the exact instruction where, e.g., a cache miss occurred.)
In Solaris, DTrace added the CPC provider, which allows probes to be placed on the counter interrupts. The documentation is somewhat vague, because everyone is trying hard not to replicate the Intel/AMD documents which list the counters, since they evolve so rapidly. The CPC provider is not (currently) in DTrace for Linux. It's been on my TODO list and I am just checking it out. It relies on Solaris handling user-level requests and abstracting the CPU away, but, from reading on the web, it appears unable to handle the "new style" counters from AMD and Intel.
[I believe that the old-style counters are simply counters which can be set up to generate an interrupt, either on reaching a threshold or on a periodic basis, i.e. sampling-based monitoring. The new counters likely require an area of RAM to fill up, and the code in Solaris, and probably Linux, may not be ready to support this, at least not on older kernels].
I may experiment with adding a CPC provider, just because I am interested in seeing these counters and the issues they present.
[I have tried oprofile, and hit problems since it does not work inside a VM; the newer 'perf' subsystem does appear to work inside a VM, but requires rebuilding the kernel to enable the subsystem].
github #2 | Wednesday, 08 February 2012 |
https://github.com/dtrace4linux/linux
I did say I was getting to grips with it!
It's raining. | Thursday, 02 February 2012 |
Started playing around with USDT - need to iron out some bugs. Alas, if you run the simple-c example app, and reload the driver, it will panic the kernel. C'est la vie.
Hope to fix in the next day or two.
It's worth briefly describing "why". When you run an application which has user-space probes (USDT), the application talks to the dtrace driver and dynamically creates new probes against the PID of the process. You can see these by running "dtrace -l" and diffing a before-and-after scenario.
Alas, when you terminate the process, the USDT probes aren't removed, and this tickles a problem (which I need to solve).
What dtrace is supposed to do is monitor processes as they die and remove these stale probes, but currently it does not.
Now that my other dtrace problems appear to be over (subject to any naughty regressions I introduce), I can spend a little more time on USDT and go into more detail.
One area to understand is how USDT works. I have written about this before, and there are some good web articles on it. The technology is remarkably simple - but the implementation requires everything to be "just right" (we are dealing with kernel and user space, after all).