Tag: AI

I don’t like self-driving cars

Why do they bother me?  It’s the loss of independence.  Would you trust your self-driving 4×4 to take you from one end of the Rubicon Trail to the other arriving…

Why do they bother me?  It’s the loss of independence.  Would you trust your self-driving 4×4 to take you from one end of the Rubicon Trail to the other arriving unscratched, let alone all of the passenger alive?  OK, I know that about 99.9999% of everyone driving around in their 4WD, all-wheel drive and whatnot never intend to take them off a paved surface unless it’s a gravel parking lot at the local pumpkin patch.  And most of those cool guys with jacked up 4×4’s with 10’ lift kits haven’t a clue; they raise not only their vehicle so they can mount those massive tires, but also their center of gravity.  But since they’ll probably never drive off-road they don’t have to worry about flipping their vehicle—unless they drive over a curb.  To be honest I have to admit I have a Toyota Tacoma 4×4 with a 3” lift kit to accommodate my slightly larger than OEM tires.  We also own a mini-van and a Prius.

My “mega truck”

WAKE UP!  I know you thought this would be on a more ethereal level and not a screed about 4×4’s.  So let’s get serious.  Given the above, I believe it will be awhile before we get rid of human driven vehicles, though the vehicle in your driveway might become a collector’s item sooner than you think.  Even tractor trailers might become autonomous soon if Mercedes has its way.  But not all trucks are that easily automated.  What about dump trucks, tow trucks, ambulances and fire trucks, or trucks with wide loads?  Hey and what about motorcycles?

 

Why are these “exceptions” important?  It’s because today the accident rate for self-driving cars is twice that of non-self-driving cars and the reason is all the other vehicles on the road.  The problem is that autonomous cars follow the rules of the road and most human drivers don’t.  Autonomous cars are more likely to be rear-ended by an inattentive human than human driven vehicles.

 

Not only do autonomous cars follow the rules, but they are learning machines.  That means that as they drive they learn.  Therefore they are not programmed for every possible situation they might encounter; they learn as they go.  This complicates things a bit since the “code” changes itself as it learns.  This makes troubleshooting and reconstructing events a bit troublesome.  This will evidently lead to legal issues yet to be imagined.  Who’s at fault when an autonomous vehicle is at fault in an incident?  This has already popped up in San Francisco where a motorcyclist claimed an autonomous vehicle hit him and he’s suing GM, even though he was the one cited.

 

Clearly this technology will require a major cultural change; maybe the millennials can handle it but I don’t think I can.

Comments Off on I don’t like self-driving cars

Are you ready for when AI isn’t artificial anymore?

            Are you ready for when AI isn’t artificial anymore?I earned an MS in Computer Science back in what has been called the “AI winter.” …

 

 

 

 

 

Your brain on AI

 

Are you ready for when AI isn’t artificial anymore?
I earned an MS in Computer Science back in what has been called the “AI winter.”  That was when AI was more theoretical than practice.  Our mainframe was a Digital PDP-11 booted from paper tape in a freezing room surrounded by tape drives.  I wrote my thesis using WordStar on an Altos microcomputer powered by an 8-bit Z80 processor with 64K of memory.  It was one of the top of the line computers for its time; it even had two 5¼” floppy drives!

 

Now there’s the “world famous” IBM Watson, known for beating humans at chess, go, and Jeopardy using AI.

 

According to Wikipedia artificial intelligence (AI) is:

 

Intelligence displayed by machines, in contrast with the natural intelligence (NI) displayed by humans and other animals. In computer science AI research is defined as the study of “intelligent agents“: any device that perceives its environment and takes actions that maximize its chance of success at some goal. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

 

There are those who can hardly wait for this technology to become mature enough to implant extensions of the cloud into our brains (called hybrid thinking).  Does this sound like a good idea?  There is even the Cyborg Foundation (“IT’S TIME FOR TRANS-SPECIES TO COME OUT OF THE CLOSET”), whose stated mission is, “…to help humans become cyborgs, to promote the use of cybernetics as part of the human body and to defend cyborg rights.”  It’s bad enough that your computer can be hacked or infected with malware; imagine if your brain was.  Talk about identity theft!

 

Ironically this is exactly one of the reasons this technology is getting so much attention.  Identity assurance is a major concern in today’s workplace and everyday life.  Nymi is working on this problem, but since their solution is a band that can be removed it’s not considered the ultimate solution.  One of the proposed ultimate solutions is embedding a chip in everyone to establish their unique identity.

 

What makes AI different from other “computer programs” is that it learns and as it learns it modifies itself so it becomes better at whatever task or tasks it is assigned.  This has some interesting implications, the most important of which is that it can’t be easily audited like “normal code” can be.  The complex way AI grows and improves makes it harder to understand and even harder to control.  There are many out there like IBM’s Grady Booch who believe it is possible to “raise an AI” system to be a responsible citizen.  After thousands of years we still have a hard time raise our kids to be responsible, what make us think we can raise AI to be perfectly responsible?

 

A company, OpenAI, trained an AI to maximize its score in a virtual boat-race game, but instead of navigating the course as quickly as possible; the AI taught itself to exploit a flaw in the game by driving in circles and racking up bonus points while crashing into walls and setting itself on fire.  They say that an IA tasked with maximizing profits for a corporation—and given the seemly innocuous ability to post things on the internet—could deliberately cause political upheaval in order to benefit from a change in commodity prices.

 

Are you ready to become a cyborg?  Are you ready to turn your life over to AI?

 

Comments Off on Are you ready for when AI isn’t artificial anymore?

Type on the field below and hit Enter/Return to search