Thursday, 2 January 2014

Torque - See What Your Car's Really Doing

This may sound like something that only mechanics and car nuts would be interested in, but it should help everyone with a fairly modern car keep tabs on what's actually going on beneath the surface, and it costs almost nothing to do.

You need:

  1. An Android phone
  2. A car with an OBD-II port (you very likely have one)
  3. A Bluetooth OBD-II module. These cost around £6 / $10 and are easily available, e.g. on eBay. I personally have a cheap ELM327 version as in the link and it works perfectly.
Torque (https://play.google.com/store/apps/details?id=org.prowl.torque) has been around for years. You may already know about it. The free version gets you started; the paid-for version is definitely worth getting for all the extra features.

You will need to find your OBD port in your car. They are generally located in the driver's footwell, or behind the ashtray, or under the centre console by the handbrake:


Plug the OBD module in, pair your phone to it and open Torque. Make sure you turn on faster communication in the settings as the ELM327 adapters support this. Also it's worth setting up your vehicle profile to get accurate MPG, horsepower and 0-60 times.

From there, this is what you'll get:


In Realtime Information you can set up your views like so:


Here, I've chosen a theme which goes more with my Skoda Octavia, then I've added the following dials:

Vacuum / Boost. My car has a turbo so I can see that it's boosting correctly. I know that on overrun (decelerating in gear) it should be pulling a good vacuum. Mine seems to read -14 PSI which shows I don't have a vacuum leak in my intake, essential for a turbo car.

Revs. Nice to see alongside other items.

Throttle. Shows how much throttle the engine is being given from my electronic accelerator pedal. Interestingly, my car never goes higher than 89%; this is probably OK, but I'll follow it up.

Coolant temp. Most dashboards show 90 degrees C as soon as anything over 70 is read, so the needle isn't going up and down all the time. The car needs to know the actual temperature and measures it, so it's useful to watch: if it starts climbing above normal, you can remedy it early.

AFR. This is the air-to-fuel ratio, and I have chosen both c (called for) and m (measured). These two should be fairly close, showing that the mixture the engine is asking for is the mixture it's actually getting. Mine is a bit unstable at tickover, which is a symptom of the VW TFSI engine, whose intake gets clogged up, disturbing clean airflow into the engine.

There are loads of others you can put on there across multiple screens, including GPS speed, car-measured speed (to see how accurate your speedometer is), lots of engine readings, misfires, emissions, calculated horsepower, 0-60 times etc.

There are also other features such as reading and clearing fault logs, which can save you a fair whack by not having to pay a garage to do the same thing.

When used with a phone in an in-car holder, this app is priceless. It's the equivalent of the car having hundreds of extra gauges and dials showing you everything that's going on. You'll get to know what the readings normally say and what they mean, and over time, if something starts reading differently or you see a wrong value, you'll potentially head off expensive repair bills.

Thursday, 14 February 2013

The Ultimate Operating System


I know what this seems like, and please, hear me out. I'm not a Johnny-come-lately blurting out 'hey, wouldn't it be cool if...' about this kind of stuff. I've been around the block. I've had a good few years to come to my latest conclusion and without further ado, here it is.


Microsoft should use a Linux base for an OS.


Right, now I've got the awful part of saying it out of the way, I'll go into detail about how and why I think this would be such a good idea. For everyone. Yes, including Microsoft.


OK, here goes. I'm not going to go into the ideals and fundamentals of open source, freedom, free software and all the stuff that Linux represents; that's covered everywhere else, and I really don't have time. For this post I'm only interested in the practical side. Herewith I present my case, m'lud.


Exhibit 1: Desktop Virtualisation

VirtualBox Seamless Mode.
This is VirtualBox with its Seamless mode turned on. Seamless mode allows you to run 2 different operating systems, e.g. Windows and Linux, together so that the applications and windows of one look like they're part of the other. It's not perfect but it's an excellent feature; VMware do the same with their offerings. This came about because a lot of people desktop virtualise. Why do they desktop virtualise? Because no single OS can do everything. Windows 7 is the best Windows yet (including Windows 8) but it's still missing some stuff that you have to get a Linux distro up and running for. What stuff? I'll get to that shortly.


Exhibit 2: Ubuntu



If it wasn't for Ubuntu, desktop Linux wouldn't have gotten half as much coverage as it has over the last 6 years. It was once on an upward curve of adoption beyond the growth curves of any other OS, but various issues have seen that checked, not least the decision to go with Unity against almost everyone's wishes, but that's for another time. It's still there, plugging away, and the enlightened people at Canonical know that tomorrow's bread is buttered in the mobile space, so we're seeing the Ubuntu phone cropping up here and there. The major issues are the lack of a really good enterprise-level office suite, Exchange access that actually works (seriously, Evolution has to be the biggest joke of a software product ever written), and not being able to just run anything that's Windows-only.


Exhibit 3: PuTTY



PuTTY is the de facto telnet/SSH client for Windows. Everyone knows about it. Well, almost everyone. Its massive use is down to the fact that people need to SSH into stuff. This may not be earth-shattering news to many, but in a corporate world of Microsoft Exchange, Outlook, Office and SQL Server, it's easy to forget that actually, the Internet (and therefore the world) runs on Linux:


Exhibit 4: Apache + NGinx


I can't really embellish this a lot more. And it's kinda still exhibit 3. Apache + NGinx drive the vast majority of the Internet, IIS is dying after a brief spurt in 2007, and outside of forced use of Microsoft products in corporate environments where people don't know any better, or where .Net developers are cheap and easy to find, Microsoft isn't really taken seriously.


Get To The Point Already


OK, I've established that Linux is a big player, right? I won't even mention Android with its meteoric growth and 85% global market share. Oops. Linux has some stuff that Windows doesn't have and never will, but really should:

LAMP. Yes there's WAMP. But Microsoft needs Apache with modules as they are now. Fully 100% compatible and portable. In config files. With a nice shell to control it.

Pseudo-terminals. I think PTS support is the biggest obstacle in this whole thing, but they're absolutely required, because running Windows on pseudo-terminals is a prerequisite for...

SSH. I cry a little every time I have to remote desktop into a Windows machine. I mean, WTF. Why can't I SSH into any remote server? There have been ways to try to make this happen with varying degrees of success, but never perfectly, and certainly never built in from the install. SSH with SCP and SFTP is the absolute best way to remote-admin anything.
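
For anyone who hasn't lived it, this is the sort of everyday remote admin that SSH makes trivial (hostnames, users and paths here are just placeholders):

ssh admin@webserver01 'tail -n 20 /var/log/syslog'   # run a one-off command remotely and get the output back
scp backup.tar.gz admin@webserver01:/srv/backups/    # copy a file across securely
sftp admin@webserver01                               # or open an interactive file transfer session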

BASH. There's a bit of an elephant in this room and it's called PowerShell. Some people evangelise it, but it's essentially a copy of the old ksh from Unix with some application-specific 'cmdlets' (grouped into modules) available. Some bits are nice, some are horrible, and all of it is based on the awful Windows cmd application running inside a GUI with that horrific font they've always used, the ridiculous incoherent tab completion and the weird command history (up then down, WTF?).

Centralised Package Management. Every Linux distro has a way of installing software from the command line, with GUI front-ends if you're that way inclined. Nothing has ever bettered apt / dpkg for Debian-based systems such as Ubuntu. Windows has Windows Update - for Microsoft products. Why can't I install and update my torrent software, my video editing suite etc. etc. from a central software repo in Windows? Why can't I just type apt-get install sql-server and have it fetched from the Internet, then installed and configured for me? Are Microsoft campuses like The Village, where everyone who works there thinks it's still 1995?
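
To make that concrete, this is roughly what it looks like on a Debian-based system (sql-server here is the hypothetical package name from the rant above, not a real one):

sudo apt-get update                 # refresh the package index from the central repositories
sudo apt-get install sql-server     # hypothetical: fetch, install and configure in one step
sudo apt-get upgrade                # upgrade everything installed, not just the OS
apt-cache search video editor       # search the repos for third-party software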

To Summarise


The above items bring me to my main point in all this. Linux is almost perfect as a desktop and server OS. 

It just doesn't run Windows apps. 

I dream of a day where I don't have to install Windows inside a VM on my Linux machine just to be able to run the unfortunately ubiquitous MS Office suite. The Linux kernel, with its inherent security and stability, with the proven GNU libs and software, with the software that runs the Internet, with everything well thought out, with win32/64 libraries, with everything that just works, is my dream. I doubt it'll ever happen, especially not with Ballmer killing Microsoft day by day, but if I had my way it would. One OS to rule them all. Over and out.

Wednesday, 19 September 2012

The Linux Sysadmin's Toolkit

If you're an admin for Linux servers that are going to be doing any real kind of work, you'll need to know how to make sure they're running right. You need to understand how the CPU, memory and disk get utilised by the OS, and to do that you need to know how to use a few essential tools and how to interpret the results.

I'll try and write this so admins coming from a Windows background can understand how Linux works compared to Windows.


CPU


There are 3 things you need to be concerned about with regards to how the system is performing from a processor point of view.

1. CPU utilisation percentage
2. CPU run queue (load)
3. CPU I/O wait

In Windows, you're mostly concerned with CPU utilisation as a single percentage figure, with the maximum being 100%. This isn't really the whole story though.

In Linux, if you have 4 cores in total, the CPU utilisation will be shown as a percentage with a maximum of 400%. That may seem strange to someone used to seeing 100% as the maximum, but it actually makes more sense to add up the totals of each core and show you all the cores together.

The thing to understand about this is that CPU utilisation isn't actually how busy your system is. It's a part of it, but not the whole story. It's simply a representation of how long the CPU was seen as being busy over a time period. If the system looks at a CPU core for 10ms, and that core was busy for 2ms, it will be 20% busy. It will then sample the other 3 cores, and add those to the total. If they were also all busy for 2ms out of that 10, the total CPU utilisation of the system will be 80%, with the maximum being 400%.
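
If you want to see the per-core breakdown for yourself, a couple of standard commands will do it (mpstat comes from the sysstat package, which may not be installed by default):

grep -c ^processor /proc/cpuinfo    # number of logical processors (each one adds another 100% to the ceiling)
mpstat -P ALL 1                     # per-core utilisation, refreshed every second
# pressing '1' inside top toggles a similar per-core view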

So we have a percentage of how busy the CPU is; why isn't that the whole story?


Well, if a CPU core is used by 1 process for 2ms out of 10ms, but for those 2ms there are also 5 other processes waiting to jump on that core and do stuff, a utilisation of 20% isn't really accurate is it? Because for those 2ms, the system is actually trying to do 5 times more than it actually can.


When you understand that both the CPU utilisation _and_ the CPU load are factors to be taken in conjunction with each other, you can interpret what the tools tell you.

top


top - 16:49:48 up 14 days,  6:18,  5 users,  load average: 2.75, 3.64, 3.87
Tasks: 315 total,   1 running, 314 sleeping,   0 stopped,   0 zombie
Cpu(s):  8.3%us,  1.6%sy,  0.0%ni, 86.4%id,  3.1%wa,  0.1%hi,  0.5%si,  0.0%st
Mem:  98871212k total, 81501412k used, 17369800k free,    50108k buffers
Swap:  9446212k total,    32700k used,  9413512k free,  7281528k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                                                                                                                           
32031 mysql    -10   0 71.4g  69g 6132 S   97 74.2  13958:45 mysqld                                                                                                                                                                                                         
28358 root      20   0 44176  15m 1280 D   63  0.0 210:01.71 mysqlbackup                                                                                                                                                                                                       
19749 root      20   0 69624  18m 3160 S    7  0.0   3:02.40 iotop                                                                                                                                                                                                             
 6183 root      RT   0  161m  37m  17m S    5  0.0   1188:46 aisexec                                                                                                                                                                                                           
 5397 root      39  19     0    0    0 S    1  0.0 241:39.12 kipmi0                                                                                                                                                                                                         
 2971 root      15  -5     0    0    0 S    0  0.0  65:52.74 kjournald                                                                                                                                                                                                         
    1 root      20   0  1064  392  324 S    0  0.0   0:16.52 init 


top is the standard age-old tool for quickly looking at what's going on. The system above has 8 cores, which are hyperthreaded, so I know that it has 16 logical processors available (generally found out from cat /proc/cpuinfo). When I look at the processes, the mysqld process is taking 97%, but that's from a maximum of _around_ 1600%.

Then, as I said above, we can also look at the system load, which is represented as load average. In the output above, I can see that the first figure of 2.75 is the average over the last 1 minute, 3.64 over the last 5 minutes and 3.87 over the last 15 minutes.

What do these figures mean?

While the system was sampling how busy each CPU core was, it also kept track of how many processes were running or waiting to run, and that's what the load average shows. With a load average of around 3 to 4 on a box with 16 logical processors, only a few of the 16 available queues were occupied at any time; one process was taking 97% out of a possible 1600%, and another 63%. So what looked like a fairly busy system actually has a lot of room to get busier. Until we're consistently filling almost all of the queues (16 on this system), and the total CPU utilisation is getting nearer 1600%, we don't need to worry.
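
A quick rule of thumb is to divide the load average by the number of logical processors; a rough sketch (nproc may not exist on very old distros, in which case grep -c ^processor /proc/cpuinfo does the same job):

# 1-minute load average divided by logical CPU count
# a ratio consistently near or above 1.0 means the run queues are filling up
awk -v cores="$(nproc)" '{printf "load per core: %.2f\n", $1 / cores}' /proc/loadavg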

The following is a Munin graph of the same system. We can see that the Max idle is 1600, and we're nowhere near it.


And this graph shows the load average


Again, it backs up what we saw from top. We don't have to worry about the load on this system, and we know this by combining the utilisation and load average to see what's really going on.

But what about IO?

A third variable comes into the mix and complicates things a little further: IO wait. If a process is running on a CPU core but you have a slow IO subsystem (e.g. a slow disk, or a saturated fibre channel host bus adapter), the process can end up waiting for an IO request to complete. This in turn shows up as IO wait time in the CPU figures and pushes up the load average.

If you're seeing high CPU usage and need to find out why, you can see if it's IO wait by using vmstat.
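
Both sets of figures below come from simply running vmstat with a one-second interval (the very first line it prints is an average since boot, so ignore that and watch the later samples):

vmstat 1    # one sample per second; watch 'wa' under 'cpu' and 'bi'/'bo' under 'io'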

These figures are from a web server. You can see that the io column has no blocks in and a few blocks out now and again. The blocks out are likely to be log files being written, and as it's a web server, everything is already in memory and doesn't need to be read in. No IO issues here.


procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 3  0    100 266688 302404 5135804    0    0     0     0 17822 24008 15  1 84  0  0
15  0    100 266532 302404 5135820    0    0     0   124 16510 24104 12  1 87  0  0
 0  0    100 265504 302404 5135848    0    0     0     0 18332 24488 17  2 82  0  0
 4  0    100 264312 302404 5135852    0    0     0     0 16986 23787 14  2 84  0  0
 6  0    100 265476 302404 5135864    0    0     0   344 16711 23948 15  1 83  1  0

This one is from a database server. You can see that the blocks in and blocks out (1 block is 1KB) are a lot larger, and as I ran this as vmstat 1 it's sampling every second, so it was reading 30-50MB/s and writing 10-20MB/s.

procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 3  0     33    879     57  24340    0    0 36108 15304 27169 42413 10  3 83  4  0
 2  1     33    852     57  24384    0    0 36576 15762 26833 40486  9  3 85  4  0
 2  0     33    780     57  24439    0    0 47296  9735 21587 33633  8  2 85  4  0
 0  1     33    721     57  24499    0    0 49496 19881 22993 36320  8  3 86  4  0
 4  0     33    683     57  24547    0    0 42176 13993 23573 36176  8  2 87  2  0
 5  2     33    632     57  24595    0    0 38748 10611 26785 41753 11  3 76 10  0
 4  0     33    584     57  24636    0    0 37636 12618 23149 36298 14  2 80  4  0
 6  0     33    551     57  24685    0    0 34060 13504 25268 39642 14  2 79  5  0
 3  0     33    481     57  24739    0    0 50360 10973 24150 37552 13  2 82  3  0

That's a lot of throughput. Is it affecting the CPU by making it wait on IO? Well, the 'wa' column under 'cpu' is a percentage out of 100, and with those single-digit figures next to a high 'id' (idle) column, the CPUs aren't waiting on IO for very long at all. So this server is heavily utilised for IO, but it's not affecting CPU utilisation or system load, because the IO subsystem can keep up.

IO is a bit easier to see using iostat, which gives you the % utilisation of each device in your IO subsystem.

# iostat -x -d 1
Linux 2.6.27.29-0.1-default (xxxxxxx)      09/19/12        _x86_64_

Device:         rrqm/s   wrqm/s     r/s     w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda1              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sda2              0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sdb               0.00     0.00  349.00   71.00 92320.00 21136.00   270.13     0.99    2.35   1.04  43.60
sdc               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
dm-0            145.00  1795.00  349.00   71.00 92320.00 21136.00   270.13     1.15    2.77   1.07  44.80
dm-1              0.00     0.00  493.00 1857.00 91976.00 14856.00    45.46    30.01   13.64   0.19  44.40
sdd               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
sde               0.00     0.00    0.00   14.00     0.00  9490.00   677.86     0.20   14.57   2.00   2.80
dm-2              0.00     0.00    0.00   14.00     0.00  9490.00   677.86     0.21   15.14   2.57   3.60
dm-3              0.00     0.00    0.00   14.00     0.00  9490.00   677.86     0.21   15.14   2.57   3.60

Even easier to use is iotop, which presents per-process IO in a top-style interface.
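
It needs to run as root, and the -o switch is handy for showing only the processes that are actually doing IO at that moment:

sudo iotop -o    # top-style view of per-process IO, filtered to processes currently reading or writing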



Finally, on to memory

Memory is really misunderstood in Linux: unused memory is wasted memory. Some people see the output below and panic.

# free -m
             total       used       free     shared    buffers     cached
Mem:         96553      93561       2992          0         73      21044
-/+ buffers/cache:      72443      24110
Swap:         9224         31       9192

That's 93GB used of 96GB installed RAM in the server going by the Mem: row.

Wrong. The Linux kernel grabs as much memory as it can, leaving only a small amount unused and then dishes it out to applications which request it. Anything which isn't requested by an application is then utilised for buffers and caches, including the IO buffer. Read the values in the -/+ buffers/cache line. 24GB is free, and 72GB is used by applications. That's obviously still a lot, but this is a database server, and we want to give the database engine as much memory to cache stuff as possible.
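
The arithmetic behind the -/+ buffers/cache line is easy to check yourself; a rough one-liner assuming the older free output format shown above (newer versions of free lay the columns out differently):

# used by apps = used - buffers - cached; effectively free = free + buffers + cached
free -m | awk 'NR==2 {printf "used by apps: %d MB, effectively free: %d MB\n", $3-$6-$7, $4+$6+$7}'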

Here's another one from a slightly more modest server:


# free -m
             total       used       free     shared    buffers     cached
Mem:           463        397         65          0        134        136
-/+ buffers/cache:        126        336
Swap:          475         10        465

463MB of RAM and only 65MB free?! Nope, 336MB is effectively free, as the kernel hasn't needed to dish it out to applications and has allocated it to buffers and caches.

Sunday, 2 September 2012

How To Make A Great CAT5 Cable

If you want a really nice rack install, or you're cabling up long runs of Cat 5/6 twisted-pair cable with RJ45 connectors, you really need to make your own cables. It can be a bit of a faff getting the RJ45 connectors on the end, but I'll show you the easiest way I've found to do it.

You need:

* Crimping tool. Can't really do it without one.
* Side-cutters. If not included in the crimping tool.
* RJ45 connectors.
* Cat 5/6 cable.
* Recommended: Cable tester. Really helps if you're doing a lot.

1. Measure the cable to the correct length and cut.

2. Strip the sheath off using the crimp tool's sizer, or to about 15mm.

3. There you'll see 4 twisted pairs of insulated wires: 1 x Orange / Orange-White striped, 1 x Green / Green-White striped, 1 x Brown / Brown-White striped, 1 x Blue / Blue-White striped.

At this point, make sure your boots are on, if you're using them.

Don't forget your booties, it's cold outside!

4. Untwist all 4 pairs about 2 or 3 times, trying to straighten each wire as you do so.

5. If you can get them into the same kind of order as on the right, it will be a bit easier. Don't worry if not.

6. By pushing the wires together, get them into the order Orange, Green, Brown, with the striped wire of each pair first.

7. Then take the Blue pair, swap its striped / plain order back to front (so the solid Blue comes first), and push it in between the two Greens.

8. Try to straighten them as much as possible. If you need to trim them with side-cutters, now's the time.
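
For reference, steps 6 to 8 should leave the wires in the standard T568B order, from pin 1 to pin 8: Orange-White, Orange, Green-White, Blue, Blue-White, Green, Brown-White, Brown.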

9. While holding tightly, gently push the wires into the RJ45 blank, checking they're still in the correct order before pushing all the way home.

10. Then push all the way home very tightly. Check the wires on the edges are pushed up to the end.

11. Then crimp tightly.

12. And admire your handiwork, testing with a cable tester if you have one.

Monday, 2 April 2012

Prying Dave's Folly

Hello Dave
So the UK government wants to monitor your emails, web usage, calls and texts. Let's all panic.


But, let's not. This government is akin to a racist on Twitter, expelling rubbish directly from their deep, dark fantasies, not allowing their inhibitions to take hold and rein them in. I doubt this one passed the Cabinet Reality Assessment Procedure (CRAP) test before going public. Fabulously, they've gone public very prematurely on this one, not bothering to actually consult anyone who knows what they're talking about (or only listening to the wrong people), and at the same time showing their hand. While you should worry about the intentions and the implications, you needn't worry about it actually happening. Here's why.


Let's start with web site visits. This is the easy one. There are a few stages involved in getting your computer to communicate with a server somewhere else in the world which serves you web content. The first part is a DNS lookup. You type in www.terroristdaily.com and your computer goes to your configured DNS server (usually your ISP's) to translate the name into a numeric IP address for direct access. This DNS request is in plain text, transmitted in the clear, and can be intercepted by your ISP. So it's relatively easy for the government to pressurise your ISP into syphoning off all DNS requests into their own systems to log and analyse. They would then be supplied with lists of the web sites that you have requested in your browser, or indeed anything that your computer has been instructed to access, either by you or by some nefarious bit of spyware / malware you've been afflicted by. This is also something to be mindful of: not everything your computer accesses is initiated by you.
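
You can watch one of these lookups happen for yourself with dig (the domain below is just a placeholder); by default it asks whichever DNS server your machine is configured to use, usually your ISP's:

dig www.example.com             # a plain-text DNS lookup, sent unencrypted to your configured resolver
dig @8.8.8.8 www.example.com    # the same lookup sent explicitly to Google's public resolver instead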


The next bit is the actual transfer of data, and where it gets interesting.


Years ago, it became apparent that we needed to secure information exchanges across the Internet. Something called Secure Sockets Layer (SSL) was invented, and then more recently an enhanced version called Transport Layer Security (TLS) superseded it. This works by encrypting the data sent between the web site and your computer. For your specific session, it can only be decrypted at your end or on the web server serving the content; nowhere in between.
By now, every web site should be SSL by default. Every web site you visit should make your address bar turn green. Those sites that don't do this just need a little kick up the owners' / administrators' arses. (Conveniently, a good way of delivering that kick is to introduce something which makes users much more likely to visit the sites that do, such as a government snooping on you...) Then, if web browsers start attempting to connect to web sites using SSL first, and only fall back to plain text with a warning if they can't, all web sites would very quickly become SSL-only.
So that's intercepting traffic in the middle taken care of, but what about the actual web sites? Well, Big Bad Dave can't get every single web site everywhere to log traffic for him, so there's no way they can monitor what you're actually doing on a web site that is secure. They can see where you're going from DNS lookups (and even then only maybe; I'll elaborate later), but not what you're doing when you're there.


Now that we've seen how SSL and TLS secure conversations between users and web sites, email security becomes a bit easier to understand. There are 2 major ways people use email: webmail and remote mail servers. Communications with webmail servers happen through your browser and are subject to exactly the same encrypted SSL traffic as visits to any other web site. Take GMail and Hotmail for example: both enforce SSL by default, so sending someone with a GMail address an email from your own GMail address means that the mail never goes outside of Google, and neither your ISP nor anyone else can see anything to do with what's in the mail, who it's for etc. They would need Google to give them info for that...


It gets slightly more complicated when mail goes outside of webmail. If you send an email from your GMail to terrorism@letsblowthemup.com, and the mail server for letsblowthemup.com is a standard old SMTP server, then the info is likely to be sent in the clear and can be intercepted. However, any aspiring terrorists (which these plans are made to catch, right?) will encrypt the mail before it gets sent. There is Pretty Good Privacy (PGP), and its later, free implementation, GNU Privacy Guard (GPG), both of which are personal-level encryption tools that make it simple to encrypt a document, an email etc. The use of these tools is widespread, and will become the default simple way of doing things in mail client programs such as Outlook and Thunderbird, with support in webmail coming soon after.
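
As a rough sketch of how simple the GPG side already is, assuming you've imported the recipient's public key (the address and filename are just placeholders):

gpg --encrypt --recipient friend@example.com message.txt    # writes message.txt.gpg, readable only by the recipient
gpg --decrypt message.txt.gpg                               # what the recipient runs at their end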


A final consideration is a Virtual Private Network (VPN). VPNs were invented to provide privacy and security between computers communicating over the Internet. They provide a layer of segregation where the data is sent over public networks, but can only be read if you're part of the private network. In the late 90s and early 2000s, when the Arab states started getting more Western immigrants wanting the same unrestricted Internet access they'd become accustomed to, they attempted to control access at the ISP level. This pushed users towards VPNs to get around any interception or blocking in place, and meant that the ISPs were unable to block specific types of traffic inside the VPNs. I personally supported someone who moved from the UK to Dubai and found that he was unable to use Skype out there, as the local ISP had blocked Skype in favour of their own paid-for version. A simple VPN configuration later and he was using Skype, and there was nothing the local ISP could do about it. This is also where it comes back to DNS lookups, because if they're done inside a VPN, they're also encrypted and can't be intercepted. The Tor Project achieves much the same thing on a mass scale, allowing normal users to remain anonymous.


So to summarise: yes, if the government want to snoop on which web sites you're visiting, then it's not difficult for them to do so, unless you use a different DNS server. It becomes almost impossible for them to see what you're actually doing on a web site, unless the site owner is willing to give them traffic logs (which is unrealistic; it places far too much overhead on the site owners). Most people use webmail now, so that is already secure and works by the same rules as secure web sites.


What can you do to make sure the government can't snoop on you? Well, start by encouraging the use of web sites which are secure by default. Always type https:// at the start of an address to attempt to connect securely.
Then, use a different DNS server to your ISP's. Nice, reliable DNS servers are Google's, which are 8.8.8.8 and 8.8.4.4. It won't stop your ISP from being able to intercept DNS lookup traffic, but they can't just hand over simple logs.
Then, if you're not using webmail, use GPG to encrypt your emails.


In the end, the logistics of the government being able to snoop on and log everything everyone does are insurmountable. But they only want you to be worried that they could be looking at any time; the fact that they actually can't doesn't enter into it. Even then, it's almost useless for them to do so. This is down to the logistics of databases, but that's a whole other topic.


It's also important to distinguish between anyone being able to look at where you're going, and being able to look at what you're doing. The media would like you to believe it's the latter, but that is only possible in very few cases. So for now, don't worry about it. Just think about your privacy and look for encryption everywhere. The web is built on some pretty solid foundations and a massively right-wing, paranoid, temporary government can't change that.

Monday, 26 September 2011

Returning to Windows: Part I

This series will run through how I've moved from being a full-time Linux desktop user to using Windows full-time. First up, a little about me.

I've been in the IT industry professionally for about 15 years, but I've always had something to do with computers. I started at around 7 learning BASIC and writing a few programs on my Sony Hit-Bit MSX.

(MSX is a whole other topic for another post, so I won't go into detail here.) 

After that I worked in Technical Support for a UK computer manufacturer before moving into R&D and finally into Linux system administration. During my time in R&D I had a lot to do with Microsoft, and Windows in particular, developing PC builds and configurations around everything from Windows ME to Media Centre. I got to know them very well and didn't like the way either Microsoft or Windows worked.

So I changed my home desktop computers to Linux. Mandrake Linux at first as it was extremely user-friendly and attractive. It may look a little dated now, but against Windows 98 it was amazing.


I then moved on to Ubuntu as of Breezy Badger (5.10, released October 2005) and continued to make my protest against the Microsoft wheel corruption racket that I'd experienced when dealing with them.

So, fast forward to now. Ubuntu was fantastic at the start and promised so much, but as of Natty Narwhal (11.04) it has delivered so little. When I first started, I always needed a decent video editor for my family videos. Kdenlive, Cinelerra and later PiTiVi were video editors which were always halfway there, threatening to become the all-purpose easy editing suite that Windows Movie Maker had become. But 5 years later, it hasn't happened for one reason or another. A few weeks ago I simply couldn't hold out any longer: I installed Windows 7 on my main computer, stopped being a martyr and took the easy life again. My experiences since then have been mixed, but now I'm in the position where I can provide the fairly rare insight of an experienced Linux user discovering the pitfalls of being a newbie Windows user. In all honesty, I know what to expect; it's not that different to where I left it, but I still have a fresh view on most of it.

The next part of this series will cover how the installation differed from what I'm used to. How easy is Windows 7 to set up and get ready to use compared to Ubuntu?

Monday, 1 August 2011

On the Origin of Cars and Human Beings

My moderately recent uptake of running, in an attempt at a marathon in October, has made me realise just how incredibly close the mechanics of motor vehicle engines are to the mechanics of humans. A heart surgeon recently described opening up the chest cavity and looking inside as "just mechanics". That got me thinking about just how right he is. Perhaps it's no accident that the two are so close in the way they work either; we've now had just over 100 years of enhancement and refinement of car engines, so the natural process should follow the best formula for working with physics, proving that evolution is ultimately the best judge. I'll try and explain how I think the two things match each other, and how they differ. By comparing them we can actually have a good guess at how cars will 'evolve' in the future, by looking at how the human body is better than an internal combustion engine.

Combustion
When exercising, you draw air into your lungs to react with food in your stomach, to be carried around the blood stream to your muscles. To help this process work, you need water. Without water you effectively dry up and grind to a halt. It's very easy to perceive this happening when you're dehydrating while exercising; you feel muscles tighten, your blood thickens and saliva turns very thick. This is very similar to how an engine will draw air in to the combustion chambers (lungs), combine the air with fuel (food) and use the chemical reaction to create energy. Also very similar is how lubrication is required. The engine requires oil to lubricate the workings otherwise it will dry up in the same way as your muscles and joints will dry up without water.

So the way the car engine produces energy is very similar to how we produce energy. What else is similar?

Tuning
When trying to get more performance out of your engine, it's a simple principle. More air and fuel in equals more energy generated. It's also remarkably similar with the human body. You exercise to increase your aerobic and anaerobic threshold by increasing the size of the energy pathways to your muscles. You exercise to make your heart and lungs more efficient to be able to make better use of the oxygen you can draw in and more efficient at using the energy. This is the equivalent of boring out your engine by making the cylinders bigger and holding more air, and increasing your engine's volumetric efficiency by allowing as much air to be drawn in on every stroke (breath) as possible by porting, polishing and general head work.
You can increase the fuel pump size, the fuel line size, the fuel regulator and the injectors / carburettor. This is the same principle as the size of your veins increasing to allow more fuel for your muscles to be carried. Most tuners will be familiar with the term 'Italian Tune-up' where giving an engine a blast will remove old deposits and scale and effectively allow your engine to perform better. The exact same thing happens with your body.

The Future
I firmly believe that any advances in the technology of the internal combustion engine have so closely imitated the mechanics of the human body that it's a fairly easy conclusion to come to that future advances will go down the same route. Where can we expect to see engine technology go next, if it does mimic human biology?

Well, the body's ability to self-heal and adapt to load is a great advantage. It makes it able to last, literally, a lifetime. As you ask your body to do more and more demanding things, your physique and attributes change to allow greater abilities at these tasks. Some modern cars have the ability to change the air and fuelling pattern depending on how it learns your driving style, so we can see some of that already. Even some technologies such as variable valve timing are also evidence that this is starting to happen, so I think we'll see engines able to adapt better to different driving conditions, demands and styles in future. We may also see more advancements in the way that engines are able to heal themselves by making use of modern materials.

It's no surprise that engines and the human body closely model each other, as both are intended to do the same thing; convert chemical energy into kinetic energy and deal with any associated wear and tear in the course of things. The human body has had quite a head start, but we're pushing transportation devices forward faster than evolution could manage, to the point where we could very well merge.

Can you see this trend continuing? Where will it go?