Mobile Gaming | Saturday, 22 June 2013 |
I personally like "Tower Defence" games - there are many excellent examples across the platforms, but equally, there are many that are questionable.
Firstly: pay or free? I rarely pay for games - there is simply too little bang for the buck. I did pay for some iOS games in the early days, but there are enough free ones that paying for something which turns out to be worse than the free ones tends to put you off.
Next: big or small? I primarily want my devices for myself. Mobile devices are excellent fodder for garnering a collection of things - whether you are into apps, games, books, movies, etc. It becomes a compulsion to fill your device with whatever you like - especially the "freebies". When the App Store first came out, collecting apps on my iPod was a great hobby. After a while, the problem of "filling up" the home screen arose - with so many apps, finding the one you were after became a problem. With folders, you could organise and reorganise the apps, e.g. "Games 1", "Games 2", ...
But then you had the "where did I put it?" problem. And you also had the "where was that app I recently downloaded?" problem.
These problems became worse as you acquired more devices (iPad, Android, etc.), because now there's a danger of trying to ensure each device mirrors every other one - even that app (like a remote-PC app) which seems so useful but which you will never actually use.
Then you have those apps which are never upgraded - despite being great - and which no longer work on the latest OS. Or apps whose upgrades add no real usability, except to support the latest OS, device or screen resolution - usually bloating the app.
Do I want to download a 1.7GB app to my device? (There was a game I was interested in, but at 1.7GB I decided "no - never".) These are not apps; they are *weapons*. If I decide to download such an app, it will want to copy itself to the main machine, and thence to the other devices, and every time the other devices need a reboot or restore, you are hit with the download all over again (in this scenario, a download from the PC to the device, not a download over the internet).
Back to Tower Defence games: there are large ones - where large is 50-100MB+ - and there are small ones. (Remember the early days, when an app was measured in megabytes?!)
Now the latest annoyance - more so on the App Store: games that have in-app purchases. On principle, I will not download these, except by accident. I don't want an app to be a live trojan which can help itself to my account, so I avoid these. Of course, most of the free apps fit into this category.
But there are paid apps which have in-app purchases too. Really - the worst of all worlds. I understand developers need to make a living, and it's a great idea - but there are games out there where it is not really clear whether you are buying with real $$$ or in-game $$$, and so you could fall victim to one of these things.
The nature of the app universe is that all apps can have in-app purchases, along with adverts, such that your mobile or iPod becomes more like one of those glossy magazines you used to see at a doctor's surgery - and no longer a technical gadget.
The diversity of the app stores is great - there is a lot to learn from the many apps out there, from the aesthetics and graphics to trying to understand the skills and algorithms behind what makes an app or game run.
I am not a "gamer": I used to buy games for Nintendo or XBox or PS2, but I found the "hoarding" mentality would kick in - buy a game because of its cover or genre, and find that within minutes the game was a "dud" - a total waste of money. With mobile gaming, you are not putting down as much money, but there are many "dud"s out there.
And what makes all of this worse is that there are almost no good reviews of games on the internet. The many sites doing "App of the Week" seem to advertise shovelware. In pre-mobile days, the gaming websites were atrociously bad guides to games (this is very subjective - many sites may be good for specific genres of games and people). And magazines are the worst. The problem with all reviews (and this applies to games and non-gaming equipment alike) is how much reviewers gush over the product.
It's only when the upgrade or version 2 comes out that they spell out what the problems were.
I will give a for-instance: the Microsoft Kinect. It was good for one game. It was very poor as a technical item - yet I don't think I ever saw a bad review of it. And certain gamers react negatively to it because it's useless for their kind of games. Even the new Xbox One with its new version of the Kinect is being enthused over all over the place (ignoring the "Xbox 180" jokes), but it misses a "point". (I won't describe the "point", because this is subjective - I am sure it will get great use from a certain class of gamers.)
I really shouldn't write about games, being a non-gamer, but I have spent, and would spend, money on games if I could be sure of the enjoyment and the bang-for-buck aspect.
ttytter - twitter client | Wednesday, 19 June 2013 |
So, +1 to ttytter. Thank you!
twidge: an example of what? | Wednesday, 19 June 2013 |
Let me quote the INSTALL file:
Sorry, need more here...Try:
    ghc --make -o setup Setup.lhs
    ./setup configure
    ./setup build
    ./setup install
To run, you will need to have curl installed
That's fair enough. So, what is "ghc"? I presume it's the Glasgow Haskell Compiler. So we run this:
    $ ./setup configure
    setup: At least the following dependencies are missing:
    ConfigFile -any, HSH -any, MissingH >=1.0.0, aeson >=0.6.1.0 && <0.7,
    curl -any, hoauth >=0.3.4 && <0.4, hslogger -any, mtl -any, network -any,
    parsec -any, regex-posix -any, text >=0.11.2.0 && <0.12, utf8-string -any
That's very informative. What exactly is it referring to? These are not apt-get packages (I am on Ubuntu) - presumably they are Haskell library names. I can't even tell what language twidge is written in - Haskell, I presume. I am probably missing much of the runtime.
Even stranger - the size of the source distro is around 200kB of files - code and documentation. Most of it is unintelligible. Compiling the setup program creates an 8MB binary. (actually 4.9MB of code + data and the rest is symbols).
And I still don't know what to do next.
Oh well, looks like twidge gets uninstalled... and off I go to search for something I can make sense of.
IT is complicated. No it isn't. Yes it is. | Sunday, 16 June 2013 |
There are lots of interesting problems you can set yourself as a developer - experimenting with algorithms and graphics.
I come from a time when 4K of memory was large enough (the early Z80 micros), and 256K on the bigger minis was huge.
I am always perplexed when staring at my browser - whether it's slashdot, engadget or twitter. The algorithmic part of these sites and programs is mostly irrelevant - pretty visuals and icons along with Web 2.0-style auto updates.
A long time ago, it was easy to write your own browser from scratch - write an HTML parser, break the text up into bold sections and hyperlinks. That's a great way to learn to program, by the way.
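As a rough illustration of that learning exercise - a minimal sketch in Python, using the standard html.parser module, which only cares about bold runs and hyperlinks (the tag handling is deliberately naive):

    # A toy HTML "renderer": pull out bold runs and hyperlinks, ignore everything else.
    # This is only a sketch of the learning exercise described above, not a real browser.
    from html.parser import HTMLParser

    class ToyBrowser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.bold = False
            self.href = None

        def handle_starttag(self, tag, attrs):
            if tag in ("b", "strong"):
                self.bold = True
            elif tag == "a":
                self.href = dict(attrs).get("href")

        def handle_endtag(self, tag):
            if tag in ("b", "strong"):
                self.bold = False
            elif tag == "a":
                self.href = None

        def handle_data(self, data):
            text = data.strip()
            if not text:
                return
            if self.bold:
                print("[BOLD]", text)
            elif self.href:
                print("[LINK]", text, "->", self.href)
            else:
                print(text)

    ToyBrowser().feed('<p>Hello <b>world</b>, see <a href="http://example.com">this</a>.</p>')

Even a toy like this has to decide what to do about nesting and unclosed tags - which is exactly where the trouble starts.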
On real web sites, the HTML is not clean and can suffer from unbalanced markup. A real web browser has to decide how to handle the inconsistent and illogical real-world examples of non-conforming HTML. And the XHTML and W3C initiatives to define "correct" HTML were effectively abandoned.
Assuming you got far enough to render reasonable examples of HTML, you have covered maybe less than 0.5% of a real web browser.
CSS and JavaScript, multiple tabs, non-blocking APIs, network connections, caching, and animated visuals - each of these is a large project in itself. In fact, for CSS + JavaScript rendering, most browsers have abandoned their own implementations and pretty much settled on WebKit (Safari, Opera, Android) - although Internet Explorer and Firefox hold out with their own engines; apologies to Firefox if I got that wrong.
All the browsers are reasonably huge as binary downloads, and are astonishingly huge (and impressive) as source code downloads. Few people attempt to compile a browser from source.
What this demonstrates is the power of OO coding and class libraries which do well-defined things. It's a long time since I looked at the code of a browser - there is a lot of good stuff in there, but it's so huge that few people can begin to understand much more than a handful of disconnected methods.
It's a bit like going from a mud hut to a skyscraper in terms of technical achievement - such that now, nobody tries to build a brand new skyscraper; they just take the existing model of a skyscraper and apply small changes to do something new.
As I write this, I am staring at my twitter page in one of my many browser tabs, realising that the layer upon layer of stuff needed to make that page happen uses ever more resources - even a high-powered machine with a lot of RAM struggles to make the experience responsive. (Twitter said there were 1200+ new tweets, and an attempt to load them made Firefox time out and suggest the JavaScript on the page was unresponsive.)
Many code optimisations in a mature program may lead to tens of percent performance increases, and CPU speeds are only going up by 10-50% per year, yet a small piece of JavaScript can use up tens of thousands of percent more resources - so it is a losing arms race for CPUs, compilers or web browsers to get ever faster. The nature of the entire software development stack is to effectively stop thinking about low-level optimisations and let people do things like load thousands of tweets into a page (and this is not Twitter's fault). I am guilty of writing HTML pages with a couple of hundred thousand rows in a table (this is useful when doing initial analytics - see how bad the problem is before deciding how to avoid displaying 200k rows on a single web page). [The solution is a REST interface or a CGI-type script on the server to allow pagination of results - but that is ugly for large data sets.]
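As a sketch of that pagination idea (the function names and the 500-row page size are invented for illustration, not any particular framework's API):

    # Sketch of server-side pagination: hand the browser one page of rows at a time
    # instead of a single 200k-row HTML table. The row source and page size are
    # illustrative assumptions, not anyone's real API.
    def get_page(rows, page, page_size=500):
        start = page * page_size
        return rows[start:start + page_size]

    def render_page(rows, page, page_size=500):
        body = "".join("<tr><td>%s</td></tr>" % r for r in get_page(rows, page, page_size))
        total_pages = (len(rows) + page_size - 1) // page_size
        nav = '<a href="?page=%d">next</a>' % (page + 1) if page + 1 < total_pages else ""
        return "<table>%s</table>%s" % (body, nav)

    # e.g. a CGI script or REST endpoint would call render_page(all_rows, int(query["page"]))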
What we have today is a situation where the volume of data to look at is huge (and, in the case of Facebook or Twitter, each item has your attention for maybe 1/10th of a second), and most of the software industry spends its effort trying many different ways to visualise that data.
CRiSP Buttons | Saturday, 15 June 2013 |
My eyesight is not as good as it used to be - I used to be happy with a 6x10 or smaller font on a 1920x1200 17" laptop screen, but now pretty much stick to 7x13bold for my fcterms.
Anyhow, I tried this setup in CRiSP, and it felt like the screen was shouting at me. But I could see the logic in using a huge font. When using a 5" phone, or an 8" or 10" iPad - devices which tend to do one thing at a time - the font size should really be the same on every screen as a percentage of your field of vision, i.e. at a longer viewing distance, one needs a physically larger font.
(As an aside, I prefer to watch a 5 or 8 inch handheld device compared to the 40" TV screen, because the smaller screen is *bigger* - it occupies more of my field of view. People laugh when I tell them I sit in front of a switched-off TV and use the handhelds to watch video.)
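To put a rough number on the "same percentage of field of vision" idea: a glyph of height h viewed at distance d subtends an angle of about 2*atan(h/2d), so keeping that angle constant means the physical font size has to scale roughly linearly with viewing distance. A quick back-of-envelope check (the 3mm glyph and the viewing distances below are assumptions):

    # Back-of-envelope check: to keep a glyph at the same visual angle, its physical
    # height must scale with viewing distance. Distances below are just assumptions.
    import math

    def visual_angle_deg(height_mm, distance_mm):
        return math.degrees(2 * math.atan(height_mm / (2.0 * distance_mm)))

    def height_for_angle(angle_deg, distance_mm):
        return 2 * distance_mm * math.tan(math.radians(angle_deg) / 2)

    phone = visual_angle_deg(3, 300)          # ~3mm glyph on a phone held at 30cm
    print(round(phone, 2), "degrees")
    print(round(height_for_angle(phone, 2500), 1), "mm needed at 2.5m (TV distance)")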
Anyway, back to CRiSP: using this huge font, I noticed some things I didn't like. Any border (either a bold border or a dotted-line border) was almost invisible - with huge fonts, single-pixel renditions go unnoticed. So, when tabbing through a dialog, it is not obvious what is selected.
I changed the default to show the current button in an orange hue, and that looks so much better. But I noticed the dotted line around the currently selected toggle is still "invisible". A comparison with Firefox's popup dialogs, using the normal default fonts, doesn't show this issue in the same way.
So, I need to amend this for the toggle buttons.
I also noticed that with a huge font (e.g. 32pt), the graphics indicating a radio button selection haven't scaled - they are like teeny dots on the screen.
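The fix I have in mind is to derive the decoration sizes from the font metrics rather than from fixed pixel counts. A sketch of the idea (the 72-points-per-inch conversion is standard; the proportions are guesses for illustration, not CRiSP's actual values):

    # Sketch: size widget decorations from the font, not from fixed pixel constants.
    # The proportions (indicator ~= 0.7 of the glyph height, border of at least 2px at
    # large fonts) are guesses for illustration only.
    def font_height_px(point_size, dpi=96):
        return int(round(point_size * dpi / 72.0))   # 32pt at 96dpi -> ~43px

    def radio_indicator_px(point_size, dpi=96):
        return max(8, int(0.7 * font_height_px(point_size, dpi)))

    def focus_border_px(point_size, dpi=96):
        return max(1, font_height_px(point_size, dpi) // 16)

    print(font_height_px(32), radio_indicator_px(32), focus_border_px(32))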
Amazing how life in the large-font lane is so different from life in the ant-sized lane!
iOS7 - What I want.... | Monday, 10 June 2013 |
But there appears to be something wrong in the gadget industry - Android and Apple.
Since the dawn of time, computers have had a "general purposeness" about them. They can be calculators, spreadsheets and data processors. With the advent of mobile computing and the impressive feats of video and audio playback...something is lacking.
I have a Galaxy Note II - a great device (apart from a few caveats). But one thing I truly love is the multi-window multitasking feature. It's difficult to believe this could be a great thing - how on earth can you have multiple windows on a tiny device and make use of them? Even on a laptop or desktop screen, one can spend ages moving windows around to uncover some application hiding underneath.
But this very simple (and yet, flawed) feature in the Samsung product provides a true value-add feature, unseen elsewhere in Android or Apple land.
It's distracting to flip from one app to another on both iOS and Android. We all need our snippets of mail or info.
But what do you do when watching TV or a film on your beloved device? Switching away for the "what's happening in the world" check is a real context switch. Now imagine you are watching a film and want to check something on imdb.com.
Well, the Samsung device is the only one that supports this. It's not quite the picture-in-picture of a true desktop - you get a side-by-side windowing arrangement, not dissimilar to Windows 2.0. But that is all you need - split the screen in half and you can keep an eye on the video whilst doing some research or whatever.
Alas, the Samsung feature doesn't work well with most apps. E.g. imdb.com works fine until you do a search, and then it doesn't handle the restricted-width window. (It used to work, but imdb.com updated the app and broke this function, alas.)
Why aren't there more value-adds like this?
MX Player for Android is a truly wonderful and brilliant app. You can fast-forward or rewind by swiping across the screen. Yet Apple's movie player is like using a black-and-white TV set to watch video - so much more innovation could have been present (the fast-forward timebar is especially awful).
I guess it is comforting that such useful but small features are not standard, as it allows the developer community to add value to our devices.
Two-Dimensional Knapsacks .. or heatmaps | Sunday, 09 June 2013 |
Heatmaps are a nice way to visualise complex data sets, and are used, for example, as a neat way to visualise disk usage.
I had kind of put the idea on a back burner... I have no explicit need for one, other than a desire to implement it.
Brendan Gregg recently posted an article on graphical heatmaps for DTrace output... which reminded me that I was curious how easy or difficult it is to implement them. [http://externaltable.blogspot.co.uk/2013/05/latency-heat-map-in-sqlplus-with.html]
Of course, something so simple is not so simple to implement. Whilst "thinking" about heatmaps, I thought I would have a go - something simple, a "trial". It's not obvious how to implement them; some research, e.g. on Wikipedia [http://en.wikipedia.org/wiki/Heat_map], explains what they are, and some digging leads to either mathematically complex articles or snippets of implementation.
A long time ago, I implemented a knapsack algorithm, used to pack Z80 assembler code into overlaid ROMs. I didn't know what I was doing, but the results were pretty good - no longer needing to lay out the code manually - and although the algorithm wasn't optimal, it was sufficient. (Given a set of object files, figure out how to map them to the ROMs so as to minimise the number of ROMs needed.) It was something along the lines of: start with the biggest object file and iteratively add new ones to the ROM until it's full.
The algorithm I wrote worked well, because there was enough slack to not need perfect packing (which becomes algorithmically more complex, e.g. trying all permutations).
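From memory it was something like the following greedy, first-fit-decreasing pack (sketched here in Python; the object sizes and ROM capacity are invented):

    # Greedy "knapsack-ish" packing: sort object files largest-first, drop each one
    # into the first ROM with room, open a new ROM when nothing fits. Not optimal,
    # but fine when there is slack. Sizes and capacity below are invented examples.
    def pack_roms(object_sizes, rom_capacity):
        roms = []                                   # each ROM is a list of sizes
        for size in sorted(object_sizes, reverse=True):
            for rom in roms:
                if sum(rom) + size <= rom_capacity:
                    rom.append(size)
                    break
            else:
                roms.append([size])                 # start a new ROM
        return roms

    print(pack_roms([700, 300, 1200, 450, 800, 150], rom_capacity=2048))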
A typical heatmap is a 2D visualisation, e.g. a tiled surface area representing the sizes of files relative to the total size. The algorithm to do this starts off quite straightforwardly:
Take the size of a file and divide it by the total size of all files to get that file's fraction of the display area. Take the sqrt() of that area to get the side of a square of the right size. One can add the squares to the display surface, top-to-bottom and left-to-right, but doing this leads to large empty areas (gaps).
The result - although interesting - isn't a heatmap per se; it's a graphical representation of the file sizes. Avoiding the gaps requires, for each square being laid out, finding a hole where it can be placed - effectively an O(n^2) operation, and also a 2D version of the knapsack problem.
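A crude version of the naive row-filling layout described above might look like this (a Python sketch of the gap-prone version, not the gap-avoiding knapsack one):

    # Naive file-size "heatmap" layout: scale each file's area to the display area,
    # make a square of side sqrt(area), and pack squares into rows left-to-right,
    # top-to-bottom. This leaves the gaps described above - proper treemap algorithms
    # (e.g. squarified treemaps) work much harder to avoid them.
    import math

    def layout(sizes, width, height):
        total = float(sum(sizes))
        rects, x, y, row_h = [], 0, 0, 0
        for size in sizes:
            side = math.sqrt(size / total * width * height)
            if x + side > width:                    # wrap to a new row
                x, y, row_h = 0, y + row_h, 0
            rects.append((x, y, side, side))
            x += side
            row_h = max(row_h, side)
        return rects

    for r in layout([500, 250, 125, 125, 60, 40], 800, 400):
        print([round(v) for v in r])

Replacing the "wrap to a new row" step with a search for the best existing hole is where the O(n^2), 2D-knapsack flavour comes in.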
I started experimenting with adding this to CRiSP's "du" macro, but I am not sure I will like the results. The macro language isn't fast enough to do millions of operations, and even if it is viable, the DBOX_GENERIC object - used to allow custom macro painting - doesn't model the kind of thing I want. (DBOX_GENERIC is really just a blank X window, so painting and repainting is expensive.) What is really needed is a canvas/pixel arrangement so that CRiSP can render the visualisation without recomputing the heatmap.
With today's machines being fast and having lots of memory, this is more like the Wayland approach to graphics - dealing with bitmap buffers rather than drawing primitives.
I need to implement the code in the CRiSP macro language as a "first pass" to look at the efficiency, and then decide how to let CRiSP itself do this several orders of magnitude more efficiently.
The "Ultimate" Laptop | Sunday, 02 June 2013 |
Reread the sentence above - those units are correct!
My current laptop - about 1-2 years old - is a 4-core (with hyperthreads) i7 with 8GB RAM, 2x500GB hard drives and a 1920x1080 screen.
This week sees the advent of the Haswell chips - promising great battery life, enough to warrant an ultrabook as a replacement for an iPad or tablet. For many, 4-8GB RAM is a sweet spot. For a developer, it isn't.
My laptop maxes out at 8GB RAM - I tried to get 16GB RAM when I purchased it, but it was just out of reach and the i7 specs were too confusing. (I purchased my Dell XPS laptop because it could support 16GB RAM, but it turns out the deal I got came with the wrong graphics card, so I was limited to 8GB.) 8GB was OK, but my prior laptop was 8GB too, and 8GB RAM was now the new "minimum" spec for VM development work. (Most of my VMs have 512-800MB RAM, but by the time you run 2-4 of these, plus Firefox, plus the bloated KDE desktop along with all the other garbage on a typical Linux distro, 8GB is too small. 16GB is too small. 32GB? Well - that's not bad. Maybe 64GB is better!)
So what would the ideal laptop be? (And I use a laptop - not for portability, but for comfort.) I'll take a stab at predicting my ideal laptop, based on the current and near-term future. I could never have imagined today's technology back when my 25MHz i386 was current. So let's try this:
128GB RAM. Maybe the next generation will support 64GB, so 128GB is not too far out. This should be good for 4-8GB RAM VMs (I can't imagine what I would run in those VMs, but I suspect Windows 8+ and Ubuntu will require that).
32-core CPUs. We seem not to have budged from "8" (or 4 + 4 hyperthreads) for a while. I really don't care about graphics - I don't play games, and those wasted transistors in Ivy Bridge/Haswell would be better spent on real computing power. (Maybe I do need some onboard graphics, for GPU-type work, but not a chip where most of the transistor count is for graphics, sitting idle.)
The more I think about this, the more I think 32 cores is a little puny. How about 128 CPU cores? Most of the time you don't need more than a handful, but to do complex threading work, you do.
Screen resolution: more is better. When we went from 1024x768 -> 1280x1024 -> 1600x1200 -> 1920x1200, it was great - significant extra screen real estate and no pixel doubling. With iPads and other 10-inch devices starting to hit more than 2560x1400 screen resolutions, that's cool - but you cannot see the individual pixels, not on a 10" screen. So I suspect what would be good is something like a 6000x3000 pixel screen which, with pixel doubling, equates to a 3000x1500-type screen (in a 17" or 19" form factor; if I go to 27", 30" or above, we can get to the 4k/8k screen technologies, but currently they are too limited, e.g. in scaling to reach those dizzy heights).
How about disk? Today's 4TB drives are too small. I think 2 x 50TB is more palatable - enough to keep every copy of the Linux kernel, compiled and uncompressed, along with those 4+GB VM images. Pair that with a 2TB flash/SSD cache for that data, and now you are getting there.
It doesn't seem unrealistic to get to 8TB for a single hard drive in the near future - the rate of change has slowed - but I can but dream :-)
Now I wonder what my watch and mobile-phone-type device is going to be.
Amazon "funny" items + Firefox memory usage | Sunday, 02 June 2013 |
For additional amusement, you can search for the most expensive (usually mispriced) items on Amazon.
Which takes me to here:
http://support.mozilla.org/en-US/kb/firefox-uses-too-much-memory-ram
I cannot tell if this is an April Fools' piece of humour, or whether they actually think it is helpful.
I run Firefox on my Android phone and my laptop. I don't understand why Firefox uses 3GB of RAM on my laptop to do "nothing much" (I have a variety of tabs open, and I appreciate web pages use a lot of memory). It would be nice to be able to set an upper limit on Firefox, and not find its memory use swinging hugely depending on the web pages I visit.
If you read the link above, it explains that certain plugins may use a lot of memory, suggests disabling them, and suggests seeing whether Firefox's safe mode is any better. I am trying to imagine what the average non-developer would do having followed those instructions.
The funniest recommendation on that page:
Are you able to add more memory to the computer? Memory is cheap and will provide a huge performance boost.
Yes, memory is cheap. I have 8GB RAM - I wish I could install 16 or 32GB - but, Firefox: how much memory am I using? What is causing it (JSON? Images? CSS?)? And how can I flush that memory use?
On Android, *all* web browsers are poor in terms of caching - the browsers will reload pages (and hence use valuable data bandwidth) because the caching is extreme (i.e. they do hardly any), to conserve memory. But on the desktop it appears to be the other way around - memory is used up until the system is swapping heavily and a forced restart is needed.
I ran "about:memory" on my browser - and this is very useful, but maybe this should show the memory as a hierarchy based on the open tabs and hidden tabs. (The memory tree shows the windows but in a way that is not easy to decide what is the correct course of action).
Maybe Google's sub-process-based tabs are better?