From rearing children to building robots

I came across an article today on /r/technology about a new turtle-style robot that’s being used to teach kids to code.  Though I’m not really interested in trying to get my 3-year-old to code, it sounded interesting, so I followed the link to give it a look.  The opening line of the very first paragraph of the article absolutely stopped me in my tracks; I couldn’t believe that our world had gotten so borked up that someone would actually say these words and be OK with them:

In order to build new human children who can compete in tomorrow’s post-work world, we must teach kids to code. Everyone agrees, even Obama…

Build human children?  There is so much messed up in that statement that it’s hard to know where to begin.  Even if you get past that, the next thing we get hit with is this mythical “post-work” world that’s supposedly going to happen.

I’m neither a philosopher nor a philosopher’s son, but let’s just break these statements down a bit, beginning with the first.

To start, the phrase “human children” not only sounds weird, but it’s a bit ominous if you peel back the tech-journalist flair.  None of the major dictionaries I consulted, from modern dictionaries all the way back to the venerable Webster’s 1828, gave room for children to be anything but human.  To use the phrase “human children” specifically, then, implies that there is some other type of child, or at the very least that there is room for debate regarding who qualifies as a human child.  This could reference the abortionist doctrine that one doesn’t become human until birth (or some other arbitrary point after conception), but the fact that they want to “build” human children makes it much more likely that they’re referring to the possibility of a non-human child via artificial intelligence: the thought that if we can manage to create an intelligence comparable to our own in robots and computers, then we can finally prove that we ourselves are simply molecular machines no different than they would be.

The thought of artificial intelligence has both intrigued and frightened mankind for some time now, but until recently it was something we watched on Star Trek that was never going to happen in real life.  I still don’t believe it will truly happen, not in the sense of actually achieving human levels of sentience, but recent attempts have been good enough to cause even some science guys to balk at the idea.  You would think that Stephen Hawking of all people would be thrilled at the chance to finally put us religious creationist nuts in our place by proving that God is totally unnecessary to create human-like intelligence, but he is smart enough to realize that if we were smart enough to become god, there would be nothing to stop our creation from becoming god and pushing us out of the way.  He’s not alone; a quick web search will show you that Bill Gates and Elon Musk agree.

Now I’m not one of these folks who is scared to death of the Terminator coming to get me; on the contrary, I don’t have that much confidence in man’s abilities.  What it does show us though is the mindset that these folks have: we as humans are not special.  We are just blobs of protein bumbling about a self-created universe with no reason to exist or not exist.  While it may not be immediately obvious why this is a problem, it becomes apparent when we consider that these individuals and the companies they represent have a strong, and even sometimes direct, influence on society and specifically the public education American children receive.

Consider Common Core: if you don’t know what it is then you won’t have to look far to find very strong opinions one way or another on it.  Regardless of where you fall on the hate-it-love-it spectrum, try to put aside the opinions and emotions for just a moment to consider a few facts:

  1. The Bill and Melinda Gates Foundation funded the development of Common Core and also provided political influence to get it adopted.
  2. One of the major talking points in promoting Common Core has been the promotion of S.T.E.M. (Science, Technology, Engineering, and Math) education.
  3. One of the complaints about Common Core is its emphasis on process over results.  The linked article here is actually trying to defend Common Core, so you get both sides of the argument.
  4. History in general, and US History specifically (particularly anything that a reasonable person would perceive as positive), seems to be de-emphasized or omitted in many places.  Honestly, this is just one article of many that demonstrate it.  Spend some time reading blog posts on this topic; you can’t really rely on one random blog post for accurate news, but when they pop up all over the place complaining about the same things, you start to think there’s something to it.

Now, for an analysis of what I see in these facts.  First off, it would be asinine to think that Bill Gates paid for all this stuff and didn’t inject his thoughts and priorities into the process.  Especially if STEM is a focus, it would make sense to have one of the industry leaders guide the process of training folks to enter the field, right?

Now let’s look at the next point: no matter how you interpret the argument presented in the article, I still come away with the feeling that they are encouraging the students to trust the process with less regard for the results/consequences.  As in, “shut up and do what you’re told.”  I know it’s subtle, but I can’t help but see it against the backdrop of everything else Common Core appears to stand for.

Moving on to the last point: if we don’t know where we’ve come from, how we got here, the sacrifices it took, the passions of the men who dreamed of something better than a tyrannical government and oppressive nobles, what would be left for the young people to be passionate about now?  If you’ve ever read “Brave New World” you see that one of the secrets to controlling people is controlling passion: without it, visionaries become dull and indifferent, and patriots become cogs in political machines.  Not only that, but we know what wisdom tells us becomes of folks who do not learn from history.  We see a great movement in our nation right now, particularly among the youth, toward a Socialist government.  They are no longer being taught about the epic failures of socialism in the past, nor indeed about the loss of liberty in current socialist-aligned governments abroad.

This is what I see coming from men like this, whose influence over the next generation grows ever greater, whether by the narcotic effect of constant connection or by the repressive education schemes they’ve come up with: the message that you are nothing more than a machine that does the bidding of the elite.

That brings me to the second part of the quote from The Verge: the “post-work” world.  A lot of folks have been duped into accepting all this by the promise of a tomorrow where we’ll never have to work in the traditional sense.  We’ll sit around all day while Rosie vacuums and Rudy shows us whatever entertainment we fancy at the time, and we’ll sip a latte brought to us by yet another of the army of robotic household servants.  The sun will power it all for free, and there’ll even be other robots that repair the household robots when they break.  There is nothing realistic about this, but people seem willing to justify giving in to the pressure for even such an out-of-reach goal.  Here’s what scripture says about it:

Genesis 3:17-19
17 And unto Adam he said, Because thou hast hearkened unto the voice of thy wife, and hast eaten of the tree, of which I commanded thee, saying, Thou shalt not eat of it: cursed is the ground for thy sake; in sorrow shalt thou eat of it all the days of thy life;
18 Thorns also and thistles shall it bring forth to thee; and thou shalt eat the herb of the field;
19 In the sweat of thy face shalt thou eat bread, till thou return unto the ground; for out of it wast thou taken: for dust thou art, and unto dust shalt thou return.

Scripture guarantees us that we will not be liberated from work until all things are made new again.  Not only that, but if the technocrats have their way and get to “build human children” according to their whims, do we really think they’ll let us be equal with them?  They talk about equality all the time, but what they really mean is that we will all be equal in subservience: subservience through dependence.  What we are doing with our current technology and technocrat-driven education system is creating a generation so dependent that if the system ever failed, death would be rampant.  How many young people know how to raise crops?  Dress wild game?  Use an axe?  Or even sew by hand?  What about cooking a simple meal?

I’m not talking about facing Hollywood garbage like zombies and shark-infested tornadoes; we have enemies right now that could cripple the US’s entire technological system by detonating a nuclear weapon high in the atmosphere, creating an incredibly strong EMP.  That’s totally ignoring possible natural catastrophes like major solar flares whose effects are at this point only the subject of speculation.

So when did we go from rearing children to building biological robots?

Let’s wrap up with some scripture that instructs us in the most important aspect of proper child-rearing, to be complemented with applicable life skills:

Deuteronomy 6:4-9
4 Hear, O Israel: The LORD our God is one LORD:
5 And thou shalt love the LORD thy God with all thine heart, and with all thy soul, and with all thy might.
6 And these words, which I command thee this day, shall be in thine heart:
7 And thou shalt teach them diligently unto thy children, and shalt talk of them when thou sittest in thine house, and when thou walkest by the way, and when thou liest down, and when thou risest up.
8 And thou shalt bind them for a sign upon thine hand, and they shall be as frontlets between thine eyes.
9 And thou shalt write them upon the posts of thy house, and on thy gates.

Using AutoItX as a DLL in Python

We’ve been using AutoIt for a while now at the shop.  We do a LOT of repetitive stuff; for instance, every computer that we clean up goes through pretty much the exact same process.  Any time you find yourself performing the same steps over and over again on a computer by hand, that’s a sign you should be figuring out a way to let the computer do it for you (that’s what they’re for!), and AutoIt is one of the easiest ways to automate GUI interaction for those pesky Windows programs that don’t allow automation via command line options.

For a while now, though, I’ve been struggling with the fact that while AutoIt is really good at GUI automation, it isn’t very good at things like manipulating complex configuration data: a job which, unsurprisingly, often goes hand-in-hand with automating a few mouse clicks.  Often you’ll use automated clicks to install a program, but find that directly editing a config file is an easier way to configure it, and that’s where AutoIt falls short.  It has file manipulation tools, but they are very basic.  After all, it’s a GUI automation kit, NOT a full-fledged programming language; I don’t blame the AutoIt folks one bit for doing one thing and doing it well.  That’s the mantra we ’nix folks live by!  So on and off over time I’ve peeked around for a different solution, one that gives me both solid GUI automation and a full-fledged programming language with lots of good modules/libraries for various tasks.

Other GUI automation tools were eliminated pretty quickly.  I didn’t find anything as solid or feature-complete as AutoIt.  That left me looking for a way to glue multiple tools together that did NOT result in a house-of-cards setup that would be nearly impossible to replicate in the case of a failure, or that would rely on the perfect alignment of planets to run reliably.  It didn’t take me long to realize that AutoItX was my best bet.

AutoItX (available on the main AutoIt download page) is basically a library version of AutoIt that can be used from other programming languages via a DLL or the Windows COM system.  It comes with some documentation for the interfaces, but for me the installer didn’t put it in the start menu; I had to dig in the Program Files folder to find the .chm file manually.  The trick was figuring out which programming language was best suited to the task of interfacing with the DLL and manipulating config files in formats like INI and JSON.  The setup would have to be totally portable–we run our tools from a file share on our server, and we can’t just go installing random runtimes on customer computers.  It also had to work well on Vista, 7, and 8.x, which makes things like PowerShell difficult since the varying versions provide different functionality (e.g., PowerShell 1.0 doesn’t have native JSON support).  Recently my language of choice has been Python, and exploring that option is how I found what turned out to be a huge lifesaver: Portable Python.

Portable Python is exactly that: a portable Python environment that can run from a local folder, a UMS device like a flash drive, or a network file share.  Additional modules can be installed with relative ease, and it works on all the operating systems we support right now.  Python has lots of great modules for file management, file manipulation, and config parsing for INI and JSON–pretty much everything I need.  Nicely enough, one can also easily call functions exported from a DLL file using the ctypes module.  The AutoItX folks really mean for you to use it via COM in Python, but that requires AutoItX to be installed locally, which we aren’t going to do.

So here’s my setup: Portable Python and my scripts directory are stored in a file share on our server.  It’s easy to build a batch file that executes the portable Python interpreter, passing my scripts as command line arguments, to get them running without doing messy file-association mods on the customer PCs.  The AutoItX DLL is also hosted on a file share, and my Python script can copy it to a local folder then manually load it using ctypes.  Here’s an example of one of my scripts:

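(Well, a trimmed-down sketch of one: the share path below is made up, and the AU3_-prefixed function names come from the AutoItX help file, so double-check them against your version’s .chm.)

# autoit_example.py -- load AutoItX from a share and drive a GUI with it.
# Assumes Python 2.x (hence the u"" literals) and AutoItX3.dll's stdcall
# AU3_-prefixed exports; \\server\tools is a made-up path.
import ctypes
import os
import shutil

DLL_SHARE = r"\\server\tools\AutoItX3.dll"
DLL_LOCAL = r"C:\Windows\Temp\AutoItX3.dll"

# Copy the DLL to the local disk, then load it with the stdcall convention.
if not os.path.exists(DLL_LOCAL):
    shutil.copyfile(DLL_SHARE, DLL_LOCAL)
au3 = ctypes.WinDLL(DLL_LOCAL)

# Launch Notepad, wait up to 10 seconds for its window, and type into it.
# Note the u"" strings: ctypes marshals them as wide strings, which is
# what AutoItX expects (see the note below).
au3.AU3_Run(u"notepad.exe", u"", 1)
if au3.AU3_WinWait(u"Untitled - Notepad", u"", 10):
    au3.AU3_Send(u"Automated from Portable Python!", 0)
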
Please note that the big hurdle that I had to cross was related to Unicode strings.  At first I was just passing regular strings to the AutoIt functions (like WinWait) and they never matched any windows no matter what.  After some digging I found that AutoIt is expecting Unicode strings and it is assuming that all strings passed in are already Unicode and interpreting them as such.  Explicitly passing all strings as Unicode fixes that problem.
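
In ctypes terms, the difference looks like this: a plain byte string marshals as a narrow char*, which the DLL then misreads as UTF-16 garbage, while a u"" literal marshals as the wide string AutoItX actually wants:

# Plain str -> narrow char*; AutoItX misreads it and never matches a window.
au3.AU3_WinWait("Untitled - Notepad", "", 10)   # always times out

# Unicode str -> wide wchar_t*; this is what the DLL actually expects.
au3.AU3_WinWait(u"Untitled - Notepad", u"", 10) # matches as expected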

UEFI, SecureBoot, PXE, and You

For a while now we’ve had a need to PXE-boot computers that are set up for UEFI and SecureBoot but haven’t quite been able to pull it off.  For a long time, information on the subject was really difficult to come by and was mainly in the form of discussions by experts in the process of research and development.  I’m not an expert in the field of…anything really; I’m just an everyday computer repair grunt who knows enough about a lot of different things to make him wish he knew a lot about any one thing.  Over time I’ve occasionally taken the time to do a little more searching to see if tutorial-format information had become available and today I was not disappointed.

I came across this page on the Ubuntu wiki, and it was the glue I needed to finally make the pieces fit together in my mind.  It didn’t work exactly as described, though, so I’ll document my actual steps here.  I’m starting off with an already existing PXE environment that was working the old way, i.e. BIOS booting using pxelinux.  That’s one of the big differences between my setup and the Ubuntu wiki page, and it’s the starting point for this tutorial.  There are a lot of great resources out there for setting that up, so if you need help it’s only a web search away.

One thing to keep in mind is that my server is running CentOS, but my bootable PXE environment is LinuxMint.  What that basically means is that the config files will be CentOS related but I’m borrowing files from Ubuntu (since they have signed bootloader files easily documented in their tutorial).

First, here’s a list of files as per the Ubuntu wiki page:

  • shim.efi.signed from the shim-signed package, installed as bootx64.efi under the tftp root

  • grubnetx64.efi.signed from the grub2 source package (and shipped in the grub-efi-amd64-signed binary package), installed as ‘grubx64.efi’ under the tftp root

  • unicode.pf2 from the grub-common package, installed as grub/fonts/unicode.pf2 under the tftp root.

So getting these files is pretty easy.  We’re just going to extract them directly from the packages they belong to.  Now, Ubuntu has a script on their page that is supposed to do this stuff, but I want to do it manually.  For one thing, they have you grab grubnetx64 from the Saucy repo, but that did not work for me.  It would load the grub menu, then work intermittently, otherwise giving two errors: “couldn’t send network packet” and “you need to load the kernel first”.  Some searching suggests there have been bugs fixed in that file recently, and the one from Trusty worked fine for me.  Here’s what we want to do (I ran these from my existing BIOS PXE environment):

apt-get download shim-signed
ar vx shim-signed_1.6+0.4-0ubuntu4_amd64.deb
tar -xvJf data.tar.xz
cp ./usr/lib/shim/shim.efi.signed ./bootx64.efi
# !! now you're ready to copy ./bootx64.efi to your tftproot
rm -rf ./usr data.tar.xz control.tar.gz debian-binary shim-signed_1.6+0.4-0ubuntu4_amd64.deb

wget -O grubx64.efi http://archive.ubuntu.com/ubuntu/dists/trusty/main/uefi/grub2-amd64/current/grubnetx64.efi.signed
# !! now copy ./grubx64.efi to your tftproot

apt-get download grub-common
ar vx grub-common_2.02~beta2-9ubuntu1_amd64.deb 
tar -xvJf data.tar.xz
cp ./usr/share/grub/unicode.pf2 ./
# !! now copy unicode.pf2 to your tftproot under grub/fonts (e.g. /tftpboot/grub/fonts/)
rm -rf ./usr data.tar.xz control.tar.gz debian-binary grub-common_2.02~beta2-9ubuntu1_amd64.deb

Now we need to create a grub configuration file that will be stored on tftproot under the “grub” directory (e.g. /tftpboot/grub/grub.cfg).  Mine is a bit different from the one on the Ubuntu wiki since I’m mounting an NFS root and they were not.  My NFS root is on my server (192.168.1.15) under /exports/nfsrootqiana.  Keep in mind that in the kernel load lines, “(pxe)/” refers to the tftp root directory, and that’s where the files vmlinuz-3.15.3 and initrd.img-qiana are located.  So here’s my config file:

# /tftpboot/grub/grub.cfg
set default="0"
set timeout=-1

if loadfont unicode ; then
  set gfxmode=auto
  set locale_dir=$prefix/locale
  set lang=en_US
fi
terminal_output gfxterm

set menu_color_normal=white/black
set menu_color_highlight=black/light-gray
if background_color 44,0,30; then
  clear
fi

function gfxmode {
  set gfxpayload="${1}"
  if [ "${1}" = "keep" ]; then
    set vt_handoff=vt.handoff=7
  else
    set vt_handoff=
  fi
}

set linux_gfx_mode=keep

export linux_gfx_mode

menuentry 'Linuxmint Qiana' {
  gfxmode $linux_gfx_mode
  linux (pxe)/vmlinuz-3.15.3 $vt_handoff root=/dev/nfs initrd=initrd.img-qiana nfsroot=192.168.1.15:/exports/nfsrootqiana ip=dhcp rw
  initrd (pxe)/initrd.img-qiana
}

We also need to tweak our DHCP config to respond to UEFI PXE requests.  Here is my entire updated config, including the lines that make the old BIOS PXE boot work:

# /etc/dnsmasq.d/dhcp.conf
dhcp-range=192.168.1.100,192.168.1.254,12h
dhcp-option=3,192.168.1.1
dhcp-option=15,alltech.local
dhcp-boot=pxelinux.0
dhcp-match=set:efi-x86_64,option:client-arch,7
dhcp-boot=tag:efi-x86_64,bootx64.efi
except-interface=wan0
dhcp-authoritative

As you can see, we’re using dnsmasq for DHCP.  On our old CentOS (5.10) server, the built-in dnsmasq didn’t work with that “tag:” syntax, so I had to update the program.  The source version of dnsmasq doesn’t come with the init scripts for Red Hat, so I had to cheat a little bit.  I built dnsmasq from source and did a “make install” like usual.  Then I simply edited /etc/init.d/dnsmasq and changed it like so:

# /etc/init.d/dnsmasq
# change...
dnsmasq=/usr/sbin/dnsmasq
# to...
dnsmasq=/usr/local/sbin/dnsmasq

Now it will happily use the nice new version of dnsmasq. No fuss, no muss.

We should now be able to boot PXE from either a BIOS motherboard or a UEFI motherboard with or without SecureBoot enabled. For the curious, here’s the sequence of events as I understand it. Please correct me in the comments if I get something wrong:

  1. You tell the computer you want to boot via PXE.  It sends out a PXE request.
  2. DHCP Server initially sets the boot image file to pxelinux.0
  3. If booting from UEFI, the DHCP Server sees that the client architecture is 7 (EFI) and sets the tag “efi-x86_64”.
  4. If the efi-x86_64 tag is set, the DHCP Server switches the boot image to bootx64.efi (otherwise, the PC boots from pxelinux.0)
  5. DHCP Server sends the response to the client
  6. (from here on, I’m following UEFI sequence) The client requests the file bootx64.efi via TFTP
  7. SecureBoot checks the signature on bootx64.efi, and it has a valid Microsoft signature
  8. The firmware then loads and runs bootx64.efi
  9. bootx64.efi looks for grubx64.efi via tftp and checks its signature.  It’s signed by Canonical, and passes the check
  10. bootx64.efi loads and runs grubx64.efi, which in turn loads the grub config from tftp
  11. Normal grub boot sequence occurs, and we’re cooking with gas
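
If it helps to see steps 2-4 in one place, here’s a toy sketch (illustrative Python, not anything dnsmasq actually runs) of the boot-file decision the config above produces:

# Toy model of the dhcp-boot/dhcp-match logic above -- illustration only.
def choose_boot_file(client_arch):
    boot_file = "pxelinux.0"        # dhcp-boot=pxelinux.0 (the default)
    if client_arch == 7:            # dhcp-match=set:efi-x86_64,option:client-arch,7
        boot_file = "bootx64.efi"   # dhcp-boot=tag:efi-x86_64,bootx64.efi
    return boot_file

assert choose_boot_file(0) == "pxelinux.0"   # legacy BIOS client
assert choose_boot_file(7) == "bootx64.efi"  # x86-64 UEFI client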

Git clone milk

The other day my wife sent me this text:

git you milk

Of course I know she meant to say “Got you milk”, but I can’t tell you how hard it was to stop myself from texting back:

git clone milk

Pushing contacts to a Motorola Razr flip-phone

After a series of no-one-cares circumstances, I found myself pulling out my boss’ old Motorola Razr flip-phone and activating it with a prepaid phone service.  That was all fine and dandy, but I had a huge list of contacts in my old phone that I really didn’t want to either type into the new phone or send one-by-one over Bluetooth.  For those of you who may be wondering, my phones are CDMA, not GSM, and therefore don’t have a SIM card to store those contacts on.

My wife had the same concern regarding her phone (which was also switched), so I did some research to see what Linux tools were available.  Both of our old phones were LG enV3s, and it turns out they work quite nicely with a program called BitPim (it’s in the Ubuntu repos).  In no time at all I had my wife’s contact info siphoned off the phone and into a vCard file, which imported very easily into her new Android phone’s contact list right from a MicroSD card.  The Razr was going to be a different story.

BitPim (and, it appears, other phone tools as well) interacts with the phone using specialized AT commands through its modem interface.  Unfortunately, Linux doesn’t even detect my Razr as a modem.  This is pretty easy to fix with a simple modprobe configuration file like the one below.  Don’t forget to modify the Product ID as needed so that it matches your phone (just reference the output of `lsusb` to find it), and be sure to keep the correct case in each section, as it is case-sensitive according to what I’ve read.  Note that while the sites I read named this file “motorola_razr.options”, LinuxMint (and therefore probably Ubuntu and Debian) would only load it if the extension was “.conf”.  So here’s the file:

# /etc/modprobe.d/motorola_razr.conf
alias usb:v22B8p2B44* usbserial
options usbserial vendor=0x22b8 product=0x2b44

After that, just reboot and plug the phone in; this should yield two devices: /dev/ttyUSB0 and /dev/ttyUSB1.  No one could explain why there are two; they just said to use the first one (ttyUSB0).  That was the hard part.  Now we just need to get the contacts out of BitPim and into the phone.
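
If you want to sanity-check the port before pointing BitPim at it, a quick probe like the sketch below should get an “OK” back from the modem (this assumes the pyserial module is installed, and the baud rate is a guess, so adjust as needed):

# probe_razr.py -- quick sketch: ask the phone's modem interface to identify
# itself.  Assumes pyserial is installed and the phone showed up as
# /dev/ttyUSB0 per the modprobe config above; 115200 baud is a guess.
import serial

port = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)
port.write("AT\r")     # basic attention command; the phone should answer "OK"
print(port.read(64))
port.write("ATI\r")    # identification; most handsets report make/model info
print(port.read(256))
port.close()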

I already had my contacts from my old phone in BitPim’s phonebook, so if you haven’t done that you’ll need to do that first.  Once that’s done, we need to manually tell BitPim what phone we have as it can’t properly detect the Razr.  Here’s how it goes:

  1. In BitPim, go to Edit->Settings
  2. Set the com port to /dev/ttyUSB0
  3. I set the phone type to “V3m” (there is a V3c and a V3m – I just guessed at which I had since the battery is hard to remove)
  4. Click OK
  5. You should now be able to click the “Send Phone Data” button in the toolbar, and select the PhoneBook

For me, all the contacts transferred successfully, and then BitPim immediately crashed.  😉  I don’t mind much since it worked anyway.  BitPim even handled the exception gracefully and allowed the program to continue after giving me the chance to view the stack trace, which is a nice touch.

Introducing: Firefox Extension Killer

At the shop we generally recommend that our customers use any browser except Internet Explorer.  This probably doesn’t come as a surprise to anyone who has spent any amount of time fixing the things people ruin on their computers, or anyone who has ever wondered why their markup/CSS just doesn’t work right in one browser when it works in every other browser (or why it only looks right in one browser!).  Because of its ability to behave more like the browsers of yesterday when configured as such (which is EXACTLY what our older customers want), we generally prefer Firefox.  As a result, cleaning up malicious addons has become an everyday chore for us when people bring in junked-up computers.

Out of the box, cleaning up extensions in Firefox is kind of a mixed bag.  Sometimes an extension will have a remove button; other times it won’t.  Why is this?  If an addon was installed via the Add-ons Manager, it will have a remove button.  If the files were installed manually from outside Firefox, there will not be a remove button.  I wanted to link to the area of the Mozilla knowledge base where they explain this decision (which I read years ago), but I can’t seem to find it anymore.  I think the gist was that since a manually installed addon could have files that aren’t tracked by Firefox, they didn’t want people to think that removing the extension in Firefox’s Add-ons Manager would remove all files associated with the addon.

Of course, what this has led to is that most malware addons install themselves manually, leaving the typical user with no way of removing them from Firefox.  Mozilla has an article that explains how to remove them manually, but your average Joe is never going to be able to go through that process.  Even if one is technically inclined enough to follow the directions, it’s extremely tedious to look in so many places and time-consuming to boot, which is why I put together the Firefox Extension Killer.
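
For the curious, the hiding places boil down to a handful of profile/install directories plus a couple of registry keys.  Here’s a rough Python sketch of enumerating the common ones on Windows; the paths and keys are the usual suspects from Mozilla’s documentation, so treat the list as a starting point rather than exhaustive:

# list_ff_addons.py -- rough sketch: enumerate common Firefox extension
# locations on Windows.  The paths/keys below are the commonly documented
# ones; treat this as a starting point, not a complete list.
import glob
import os
import _winreg  # Python 2; use "winreg" on Python 3

# Per-profile extensions (the usual malware drop point)
profile_glob = os.path.join(os.environ["APPDATA"],
                            r"Mozilla\Firefox\Profiles\*\extensions\*")
for path in glob.glob(profile_glob):
    print("profile addon: %s" % path)

# Globally registered extensions live under these registry keys
for hive, name in ((_winreg.HKEY_LOCAL_MACHINE, "HKLM"),
                   (_winreg.HKEY_CURRENT_USER, "HKCU")):
    try:
        key = _winreg.OpenKey(hive, r"Software\Mozilla\Firefox\Extensions")
    except WindowsError:
        continue  # key doesn't exist in this hive
    i = 0
    while True:
        try:
            ext_id, ext_path, _ = _winreg.EnumValue(key, i)
        except WindowsError:
            break  # no more values
        print("%s addon: %s -> %s" % (name, ext_id, ext_path))
        i += 1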

I have actually tried to write this program several times.  At first I wanted to do it in C++ because it doesn’t drag in a lot of system dependencies (aside from any libraries that are used, of course).  That was a hard thing to commit to considering my hatred for the Win32 API, but I just felt like it was the best choice at the time.  I’d thought of something like C#, which is much more elegant on Windows than C++, but when I first started working on the tool we were still doing a lot of Windows XP machines, and they don’t all have the .Net Framework installed; I just couldn’t see spending 10 minutes installing .Net 3.5 to save 5 minutes of repair time.  Then there were the development tools on Windows: either Visual Studio Express, which is a huge resource hog, or a cobbled-together environment trying to emulate my beloved Linux work-flow.

When that got too cumbersome to deal with, I turned to cross-compiling from Linux to Win32.  Years ago, successfully cross-compiling code was similar to building a running internal combustion engine out of Lincoln Logs, but nowadays there are full tool-chain sets in the Ubuntu/Mint repos that “just work” after installation, even if the MinGW version names are unbearably confusing.  What this work-flow meant, though, was that testing the registry-access parts of the code would be impossible in Linux.  I had the idea of writing some wrapper functions and implementing a virtual registry testing library for C++ (the perfect textbook solution), but very quickly realized that writing a library that could interact with and emulate the Windows Registry with any amount of configurability would take a LOT longer than the whole rest of Firefox Extension Killer.

After going through all this, I very quickly became disenchanted with the whole project.  I had at least gotten a CLI tool running that looked in (most of) the addon locations and just removed everything without any options.  This worked OK, since we mostly only wanted Adblock Plus installed and that was an easy reinstall.  Apart from that, I quit working on it for a year or so; it was just too painful.

Recently things have been slow at the shop, and I really started thinking about it again.  I’d piddled with a few designs on and off over time, but nothing really seemed to fit right.  The playing field is different now, too; we’re pretty much working on Vista and higher, which has at least .Net 3.0 built in.  It occurred to me that what was not an option before was the best option now: C# with Windows Forms.

My application design skills are the worst.  I do not exaggerate.  I realized recently that the designs I lay out before coding are at least as bad as the stuff I come up with when I just start typing, so I installed SharpDevelop in a Windows VM and just started typing.  Two days later I had a release of Firefox Extension Killer.

So head on over to GitHub and check it out (or clone it, as it were).  Right now I only provide a SharpDevelop project file as a means of building it, but it looks like SharpDevelop squirted out a Visual Studio project file too.  I didn’t ask for that, but I left it in the repo anyway in case it works.  YMMV.

The thought process of Telecoms

If you’ve ever had to deal with your telecom after you’ve purchased and initially set up the services (chances are you have), you realize very quickly that you’re dealing with a group of people who have a very different thought process than the rest of us.  In fact, I feel fortunate if I get in touch with a representative who actually has a thought process; the norm tends to be a zombie that reads from a script.

Recently the telecom I was using at home for my Internet connection, Charter Communications, raised my rate from $35 per month to $50 per month. After I managed to push my eyeballs back into their sockets upon looking at my bill, a quick conversation with the very helpful people at the local branch office (these are real people with brains and everything) revealed that I had no recourse but to make other arrangements and cancel my service. The catch, I was informed, was that I could only cancel my services by calling the support center—that’s where the zombies are.


After a week or two, having worked out with my next-door neighbor to split the cost of a net connection and share it via wireless, I decided the time had come to cut Charter loose.

I decided to make the call from the shop so I wouldn’t burn up cell phone minutes in case I got put on hold.  The phone call itself was actually only mildly unpleasant; I did get to speak to someone who was a native English speaker.  The only real annoyance was that after talking to the first salesperson (that’s who you get when you call to cancel), who asked a set of usage questions, I was transferred to the cancellation department, which wanted to ask the exact same questions.  Aside from that, after verifying my status as the true owner of the account in question by providing my first and last name, phone number, address, and account number, my account was canceled and I hung up.

Feeling rather more cheerful after the call than I was afraid I’d be, I got back to work on the laptop I’d been retrieving data from. Upon trying to access the customer’s email, I realized our net connection was down. After about 20 minutes of power cycling various things my boss broke down and called tech support…Charter tech support. That’s right, we used Charter at the shop as well. My boss was then informed that “somehow” our modem had been suspended from their end and that the problem could be corrected immediately. She made the change, our connection came back up, and we were back to business as usual.

Of course you see what happened here, yourself being a rational, thinking person: the “somehow” was that instead of taking a moment to look up the modem for my account using all the helpful information I had to rattle off, cancellation-boy just suspended the modem attached to the phone number I’d called from.

Way to make me sorry I left.  Don’t get me wrong: their connection is really good, and their business rates are actually decent.  I just can’t afford $50 per month for a home Internet connection, even if it is 30Mbps.  As I informed their sales guy, bandwidth doesn’t buy diapers.