From rearing children to building robots

I came across an article today on /r/technology about a new turtle-style robot that’s being used to teach kids to code.  Though I’m not really interested in trying to get my 3-year-old to code, it sounded interesting, so I followed the link to give it a look.  The opening line of the very first paragraph of the article absolutely stopped me in my tracks; I couldn’t believe that our world had gotten so borked up that someone would actually say these words and be OK with them:

In order to build new human children who can compete in tomorrow’s post-work world, we must teach kids to code. Everyone agrees, even Obama…

Build human children?  There is so much messed up in that statement it’s hard to say where to begin.  Even if you get past that, the next thing we’re hit with is this mythical “post-work” world that’s supposedly going to happen.

I’m neither a philosopher nor a philosopher’s son, but let’s break these statements down a bit, beginning with the first.

To start, the phrase “human children” not only sounds weird, but it’s a bit ominous if you peel back the tech-journalist flair.  None of the major dictionaries I consulted, from modern dictionaries all the way back to the venerable Webster’s 1828, gave room for children to be anything but human.  To use the phrase “human children” specifically, then, implies that there is some other type of child, or at the very least that there is room for debate over who qualifies as a human child.  This could reference the abortionist doctrine that one doesn’t become human until birth (or some other arbitrary point after conception), but the fact that they want to “build” human children makes it much more likely that they’re referring to the possibility of a non-human child via artificial intelligence: the thought that if we can manage to create an intelligence comparable to our own in robots and computers, then we can finally prove that we ourselves are simply molecular machines no different than they would be.

The thought of artificial intelligence has both intrigued and frightened mankind for some time now, but until recently it was something we watched on Star Trek, not something that was ever going to happen in real life.  I still don’t believe it will truly happen, not in the sense of actually achieving human levels of sentience, but recent attempts have been good enough to cause even some science guys to balk at the idea.  You would think that Stephen Hawking of all people would be thrilled at the chance to finally put us religious creationist nuts in our place by proving that God is totally unnecessary to create human-like intelligence, but he is smart enough to realize that if we were smart enough to become god, there would be nothing to stop our creation from becoming god and pushing us out of the way.  He’s not alone; a quick web search will show you that Bill Gates and Elon Musk agree.

Now I’m not one of these folks who is scared to death of the Terminator coming to get me; on the contrary, I don’t have that much confidence in man’s abilities.  What it does show us though is the mindset that these folks have: we as humans are not special.  We are just blobs of protein bumbling about a self-created universe with no reason to exist or not exist.  While it may not be immediately obvious why this is a problem, it becomes apparent when we consider that these individuals and the companies they represent have a strong, and even sometimes direct, influence on society and specifically the public education American children receive.

Consider Common Core: if you don’t know what it is then you won’t have to look far to find very strong opinions one way or another on it.  Regardless of where you fall on the hate-it-love-it spectrum, try to put aside the opinions and emotions for just a moment to consider a few facts:

  1. The Bill and Melinda Gates foundation funded the development of Common Core, and also provided political influence to get it adopted.
  2. One of the major talking points in promoting Common Core has been the promotion of S.T.E.M. (Science, Technology, Engineering, and Math) education.
  3. One of the complaints about Common Core is its emphasis on process over results.  The linked article here is actually trying to defend Common Core, so you get both sides of the argument.
  4. History in general, and US history specifically (particularly anything a reasonable person would perceive as positive), seems to be de-emphasized or omitted in many places.  Honestly, this is just one article of many that demonstrate this.  Spend some time reading blog posts on the topic; you can’t really rely on one random blog post for accurate news, but when they pop up all over the place complaining about the same things, you start to think there’s something to it.

Now, for an analysis of what I see in these facts.  First off, it would be asinine to think that Bill Gates paid for all this and didn’t inject his thoughts and priorities into the process.  Especially if STEM is a focus, it would make sense to have one of the industry leaders guide the process of training folks to enter the field, right?

Now let’s look at the next point: no matter how you interpret the argument presented in the article, I still come away with the feeling that students are being encouraged to trust the process with less regard for the results/consequences.  As in, “shut up and do what you’re told.”  I know it’s subtle, but I can’t help seeing it against the backdrop of everything else Common Core appears to stand for.

Moving on to the last point: if we don’t know where we’ve come from, how we got here, the sacrifices it took, the passions of the men who dreamed of something better than a tyrannical government and oppressive nobles, what is left for young people to be passionate about now?  If you’ve ever read Brave New World, you know that one of the secrets to controlling people is controlling passion: without it, visionaries become dull and indifferent, and patriots become cogs in political machines.  Not only that, but we know what wisdom tells us becomes of folks who do not learn from history.  We see a great movement in our nation right now, particularly among the youth, toward a socialist government.  They are no longer being taught about the epic failures of socialism in the past, nor indeed the loss of liberty in current socialist-aligned governments abroad.

This is what I see coming from men like this, whose influence is ever greater upon the next generation whether by the narcotic effect of constant connection or the repressive education schemes they’ve come up with: you are nothing more than a machine that does the bidding of the elite.

That brings me to the second part of the quote from The Verge: the “post-work” world.  A lot of folks have been duped into accepting all this by the promise of a tomorrow where we’ll never have to work in the traditional sense.  We’ll sit around all day while Rosie vacuums and Rudy shows us whatever entertainment we fancy at the time, and we’ll sip a latte brought to us by yet another of the army of robotic household servants.  The sun will power it all for free, and there’ll even be other robots to repair the household robots when they break.  There is nothing realistic about this, but people seem willing to justify giving in to the pressure for even such an out-of-reach goal.  Here’s what scripture says about it:

Genesis 3:17-19
17 And unto Adam he said, Because thou hast hearkened unto the voice of thy wife, and hast eaten of the tree, of which I commanded thee, saying, Thou shalt not eat of it: cursed is the ground for thy sake; in sorrow shalt thou eat of it all the days of thy life;
18 Thorns also and thistles shall it bring forth to thee; and thou shalt eat the herb of the field;
19 In the sweat of thy face shalt thou eat bread, till thou return unto the ground; for out of it wast thou taken: for dust thou art, and unto dust shalt thou return.

Scripture guarantees us that we will not be liberated from work until all things are made new again.  Not only that, but if the technocrats have their way and get to “build human children” according to their whims, do we really think they’ll let us be equal with them?  They talk about equality all the time, but what they really mean is that we will all be equal in subservience: subservience through dependence.  What we are doing with our current technology and technocrat-driven education system is creating a generation so dependent that if the system ever failed, death would be rampant.  How many young people know how to raise crops?  Dress wild game?  Use an axe?  Or even sew by hand?  What about cooking a simple meal?

I’m not talking about facing Hollywood garbage like zombies and shark-infested tornadoes; we have enemies right now that could cripple the US’s entire technological system by detonating a nuclear weapon high in the atmosphere, creating an incredibly strong EMP.  That’s totally ignoring possible natural catastrophes like major solar flares whose effects are at this point only the subject of speculation.

So when did we go from rearing children to building biological robots?

Let’s wrap up with some scripture that instructs us in the most important aspect of proper child rearing, to be complemented with applicable life skills:

Deuteronomy 6:4-9
4 Hear, O Israel: The LORD our God is one LORD:
5 And thou shalt love the LORD thy God with all thine heart, and with all thy soul, and with all thy might.
6 And these words, which I command thee this day, shall be in thine heart:
7 And thou shalt teach them diligently unto thy children, and shalt talk of them when thou sittest in thine house, and when thou walkest by the way, and when thou liest down, and when thou risest up.
8 And thou shalt bind them for a sign upon thine hand, and they shall be as frontlets between thine eyes.
9 And thou shalt write them upon the posts of thy house, and on thy gates.

Git clone milk

The other day my wife sent me this text:

git you milk

Of course I know she meant to say “Got you milk”, but I can’t tell you how hard it was to stop myself from texting back:

git clone milk

Pushing contacts to a Motorola Razr flip-phone

After a series of no-one-cares circumstances, I found myself pulling out my boss’ old Motorola Razr flip-phone and activating it with a prepaid phone service.  That was all fine and dandy, but I had a huge list of contacts in my old phone that I really didn’t want to either type into the new phone or send one-by-one over bluetooth.  For those of you who may be wondering, my phones are CDMA, not  GSM, and therefore don’t have a SIM card to store those contacts on.

My wife had the same concern regarding her phone (which was also switched), so I did some research to see what Linux tools were available.  Both of our old phones were LG Env3s, and it turns out they work quite nicely with a program called BitPim (it’s in the Ubuntu repos).  In no time at all I had my wife’s contact info siphoned off the phone and into a vCard file, which imported very easily into her new Android phone’s contact list right from a MicroSD card.  The Razr was going to be a different story.

BitPim (and, it appears, other phone tools as well) interacts with the phone using specialized AT commands through its modem interface.  Unfortunately, Linux doesn’t even detect my Razr as a modem.  This is pretty easy to fix with a simple modprobe configuration file like the one below.  Don’t forget to modify the Product ID as needed so that it matches your phone (just reference the output of `lsusb` to find it), and be sure to keep the correct case in each section, as it is case sensitive according to what I’ve read.  Note that while the sites I read named this file “motorola_razr.options”, Linux Mint (and therefore probably Ubuntu and Debian) would only load it if the extension was “.conf”.  So here’s the file:

# /etc/modprobe.d/motorola_razr.conf
# Map the Razr's USB ID to the usbserial driver, then tell usbserial which
# vendor/product to claim (adjust both to match your lsusb output)
alias usb:v22B8p2B44* usbserial
options usbserial vendor=0x22b8 product=0x2b44

After that, just reboot and plug the phone in; this should yield two devices: /dev/ttyUSB0 and /dev/ttyUSB1.  No one could explain why there are two; the advice was simply to use the first one (ttyUSB0).  That was the hard part.  Now we just need to get the contacts out of BitPim and into the phone.
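Before firing up BitPim, it doesn’t hurt to double-check that everything landed where it should.  Here’s a rough sketch of what I mean (the IDs are the Motorola ones from the config above, and the grep pattern is just a guess at what lsusb will show for your particular phone):

# Confirm the phone's vendor:product IDs match the modprobe config above
lsusb | grep -i motorola

# If you'd rather not reboot, loading the module by hand should also work
sudo modprobe usbserial vendor=0x22b8 product=0x2b44

# With the phone plugged in, both serial devices should show up
ls -l /dev/ttyUSB*
dmesg | grep -i ttyUSB | tail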

I already had the contacts from my old phone in BitPim’s phonebook, so if you haven’t done that yet, do it first.  Once that’s done, we need to manually tell BitPim what phone we have, since it can’t properly detect the Razr.  Here’s how it goes:

  1. In BitPim, go to Edit->Settings
  2. Set the com port to /dev/ttyUSB0
  3. I set the phone type to “V3m” (there is a V3c and a V3m – I just guessed at which I had since the battery is hard to remove)
  4. Click OK
  5. You should now be able to click the “Send Phone Data” button in the toolbar, and select the PhoneBook

For me, all the contacts transferred successfully, and then BitPim immediately crashed.  😉  I don’t mind much since it worked anyway.  To its credit, BitPim handled the exception gracefully and allowed the program to continue after giving me the chance to view the stack trace, which is a nice touch.

Tough little [not a] malware infection

Here at the shop we’ve been working with a customer recently who has been plagued with redirects.  Several of his computers had malware of various types, which we removed.  The problem was that he still had redirects when he got his computers back.  We’d sit and search for 15 or 20 minutes here at the shop (they were Google search redirects) and never see the first redirect, but he’d get them at his place.

At first we thought we were running into a type of malware we hadn’t seen before and that none of our tools were detecting, but the longer we worked the surer we were that the computers were totally clean.  After fruitless hours of running scans that turned up nothing, we came to the realization that we were up against something a little stranger than a piece of malware, and the fact that we couldn’t duplicate the redirects here at the shop kept cropping up in our discussions.  One of the other guys here at the shop had the idea that the bug might only rear its ugly head on certain ISPs’ networks, but we called the customer back and found that they were using the same ISP as us.  He was on to something though…the common denominator was his network.

Then I remembered.

Not too long ago I’d read online that a lot of the older Linksys routers were vulnerable to remote attacks in which settings could be maliciously altered without the user’s knowledge.  I immediately called the customer and, sure enough, they were using an older Linksys router.

This morning I went to the customer’s place of business and tested the theory.  There in his router was the whole problem…static DNS entries that most certainly did NOT point to his ISP’s DNS servers.  Either by remote attack or through a malware infection already installed on a PC in his network, someone had hijacked his DNS entries and kept his PCs redirecting and getting infected.
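If you ever want to check for this sort of thing yourself, a couple of quick lookups from any machine on the network will usually tell the tale.  This is just a sketch; 192.168.1.1 stands in for whatever your router’s address actually is:

# Which resolver is this machine actually using?
cat /etc/resolv.conf

# Ask the router (or whatever DNS it hands out) and a known-good public
# resolver the same question
dig @192.168.1.1 www.google.com +short
dig @8.8.8.8 www.google.com +short

# Wildly different answers for big, stable sites are a good sign that
# the DNS settings have been tampered with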

I didn’t know all the details of the vulnerability, so I replaced the router completely.  It was a WRT54G v2, so it was really old anyway.

So, now we no longer have to pull out our hair and our customer can get back to business as usual.  Crazy days.

Nook: a few weeks later

Well, the network problems with my Nook subsided completely after three days or so.  After that, it was very enjoyable.  I’ve been reading a free Google Books version of Moby Dick on it, and even in low-light conditions it is really easy on my eyes.  So far it seems to be just as good on battery life as everyone has told me, too; I’ve probably logged 5 hours or so on it so far and the battery meter is still looking healthy.

I don’t get as much time to read as I’d like, but now I can do it on the go (like this past weekend’s trip to Cincinnati) even easier than I could before.  Now if we can just get publishers like O’Reilly to offer their ebooks for fair prices…  🙂

I got a Nook!

Well, I got a Nook for Christmas and I really love the way it looks. The interface and chassis design are very sleek, and it only took a little practice to get used to the touch screen. I asked for the wifi-only version of the original Nook, and I’m really pleased with my choice. I chose the wifi-only model because I don’t plan on sitting around all day buying books; there was no reason not to save the money and just limit book purchases to places with wifi access. I wanted the original Nook as opposed to the Nook Color not only because of the huge price difference, but also because I really like the E Ink screen. I look at a computer screen all day most days, and it is so refreshing to read on a screen that looks as much like paper as the Nook’s does. I don’t feel any more eye strain looking at it than I do looking at a printed page.

So far, the only problem I’ve had with it is that B&N’s network seems to be overloaded. Apparently, they either did not expect or did not care to prepare for the surge in traffic from people firing up their brand new Nooks on Christmas day…and the day after.  That wasn’t a very wise approach, since I think a lot of people won’t have a concept of what’s happening and might return their Nooks, expecting them to always operate this spottily.

We had a white Christmas this year for the first time since 1947 (no kidding, we’re snowed in), and I’ve had a bit of extra time to play with my new gadget, but with the B&N network more-or-less down there’s not much I can do yet. 😦

First stab at pseudo-code

I’ll be the first one to tell you that I am horrible at planning/designing coding projects.  I think it’s just because I don’t like it at all, and I have a hard time making myself do it.  I love to write code, but the prerequisite work can be such a drag.  Because of this, it probably takes me a lot longer to get work done than it should.  Recently, I’ve been trying to remedy all this by making myself do a bit of planning for the calendar project.

A while back I started reading Code Complete—I don’t get a lot of reading time, so I’m only halfway through, but I’ve learned a LOT from what I’ve already read.  A good chunk of the beginning chapters is about planning and design, and one particular tool really stuck out to me at first: pseudo code.

To be honest, the reason it stuck out to me initially was that I thought it was horribly silly.  I thought, “Why would I want to write fake code, when I could just as easily write the real thing?”  Well, the more experience I gain, the more I realize that I can’t just as easily write the real thing.  Getting the exact syntax of the code correct and finding the optimum way of accomplishing a goal in a particular language can be quite time consuming, and I’ve found that it gets in the way of focusing on the problem itself.  Because of that, I’ve truly begun to see the efficacy of pseudo code.

I’m still experimenting with the syntax I want and just how much detail to go into, but I can already see how it’s helping me focus on figuring out what needs to go where and what information I need access to in various places.  The latter is where I’ve had the most trouble in the past; too often a lack of planning has led me to paint myself into a corner where I’m missing a crucial piece of data and my design just won’t let me get at it.  In Code Complete, McConnell shows how the cost of a design mistake grows with how late in the project it is discovered, and I definitely see what he means.

The way I’ve done this project so far seems to have gone well.  Going by the advice of Getting Real, I built a semi-functional mockup of the GUI first.  The idea is that this will let me know exactly what my back-end code needs to do and (more importantly) what it doesn’t need to do.  I haven’t finished reading that book either, but what I’ve read so far has really given me a different perspective on how I approach a project, since it’s written with small development teams in mind.  Is a one man team considered small?  This one’s a free read online, so check it out.

Anyway, with the GUI mockup done, I started the pseudo code.  The nice thing about pseudo code is that you can lay it out however you want and “code” to whatever level of detail you feel is necessary.  McConnell recommends writing pseudo code at such a level of detail that each line of pseudo code turns into one line of real code; I’m not going anywhere near that far in some spots, but every bit that far in others.  We’ll see how it goes!
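Just to give an idea of what that level of detail looks like, here’s a little made-up snippet in the style I’ve been using (it’s hypothetical, not lifted from the actual calendar project):

get the list of events for the selected week
for each event in the list
    skip the event if it is marked hidden
    convert the start time to the user's local time zone
    look up the display color for the event's category
    add a row to the week view showing the title, time, and color

Each of those lines ought to map to roughly one line of real code once I sit down to write it.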

I realize that pseudo code is only one planning tool among the several that I should be using, but I’m working my way up.  I think the next thing I want to tackle is UML.  I’ve tried using it before, but I found that not really knowing it well ahead of time made it very cumbersome.  If I can ever snag some time to study up on it, I think it’ll be a really great tool as well.