Artificial Intelligence Rule #7: Close Enough

#closeenough

Rule #7 of #artificialintelligence: close enough.

Number 7 is pretty far down the list, but “close enough” is an equally important concept for AI. Have you ever queried Google or Bing and gotten a single entry back, aka “the answer” to your question? No. I’ve gotten a single page of results before (I ask some weird questions), but the engine always provides a menu of options.

Google’s page-ranking algorithms are legendary and as closely guarded as military secrets. They aren’t carved in stone, either: the algorithms are constantly tweaked to counter specific hacks and to smooth out trends.
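
Nobody outside Google knows the current recipe, but the published seed of it, PageRank, fits in a few lines. Here’s a toy sketch of that original idea, with an invented four-page link graph standing in for the web:

```python
# Toy PageRank: pages "vote" for each other via links, and iteration
# settles into a ranking. Note the output is an ordered menu of
# options, never a single "answer." The link graph is made up.
damping = 0.85
links = {
    "a": ["b", "c"],   # page "a" links to "b" and "c", and so on
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):    # power iteration until the ranks settle
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = damping * rank[page] / len(outlinks)
        for target in outlinks:
            new_rank[target] += share
    rank = new_rank

# Highest-ranked first: a sorted menu, not "the answer."
print(sorted(rank.items(), key=lambda kv: -kv[1]))
```

The real thing layers hundreds of signals on top, but the shape of the output is the same: a ranked list for you to choose from.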

But artificial intelligence doesn’t provide solutions the way an algebraic math problem does. It’s stoic in its reply, showing no emotion, yet posing a voluminous suite of possibilities for the inquirer to consider. Indifferent to the vicissitudes of fortune, the AI sweeps the oceans of the internet to provide you what is . . . close enough.

Humans design those algorithms, and only you decide what is “the answer.”

#machinelearning #aibots #algorithms #aibot #deeplearning #ml

#igotnothing


Rule #4 of Artificial Intelligence: no context.  

#AIbots have been trained to do many mundane, repetitive jobs. Training involves data sets that often contain millions of data points. Without enthusiasm or angst, the AI ingests these volumes of data and returns outcomes as directed by its creator. Given x-ray images of healthy and diseased lungs, we have to tell the AI which is which; the AI is then “rewarded” for correctly identifying the difference.
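
Here’s a minimal sketch of that reward loop, with made-up numeric features standing in for real x-ray images (plain logistic regression, not any particular medical model):

```python
import numpy as np

# Toy stand-in for the x-ray task: each "image" is just a few invented
# numeric features; label 1 = diseased, 0 = healthy (labels come from humans).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # 200 fake images, 5 features each
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])  # hidden pattern the labels follow
y = (X @ true_w + rng.normal(size=200) > 0).astype(float)

w = np.zeros(5)    # the model starts out knowing nothing
lr = 0.1
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))     # predicted probability of "diseased"
    w -= lr * X.T @ (p - y) / len(y)   # the "reward": each step nudges w
                                       # toward agreeing with the human labels

p = 1 / (1 + np.exp(-(X @ w)))
print(f"agreement with human labels: {((p > 0.5) == y).mean():.0%}")
```

The only thing learned in that loop is which direction reduces error against the labels we supplied.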

But an AI doesn’t know what a lung is, how it works, that it actually exists inside a person, or what a person is.

#Google was the first major developer in #recognition when it looped together 16,000 computer processors with one billion connections so the system could watch #YouTube and find . . . cats. This was a tremendous breakthrough for 2012, but isn’t this what a 5-year-old understands without seeing millions of videos?

And a toddler knows colors and textures, and whether the cat is missing an eye or a leg or a tail. He or she knows cats are pets and that most are found in or around homes. Cats walk on four legs; they don’t swim. Not everyone likes cats, but they are lovable.

This is context, and the richness of a five-year-old’s perception outweighs a million data points.

#dontstop

#thatsnotwhatyouasked

Pet peeve – use of “literally.”

If you don’t understand literally versus figuratively, #artificialintelligence can set you straight. Rule #2 of #AI is everything is literal. AI does exactly what you tell it. That can be annoying coming from a child or a spouse or a customer-service chatbot. AI doesn’t have the contextual preferences of humans, which makes for both angst and joy when the results come back. Given a problem, AI is going to take the tasking literally. For example:

“I hooked a neural network up to my Roomba. I wanted it to learn to navigate without bumping into things, so I set up a reward scheme to encourage speed and discourage hitting the bumper sensors. It learnt to drive backwards, because there are no bumpers on the back.” – @smingleigh
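
You can watch that loophole open in miniature. The toy simulation below uses my own invented numbers, not @smingleigh’s actual setup, but it scores driving policies exactly the way the tweet describes: reward speed, penalize bumper hits. Driving backwards wins because the penalty literally never fires:

```python
import random

def score(policy, trials=1000):
    """Reward speed, penalize bumper hits -- exactly as specified."""
    total = 0.0
    for _ in range(trials):
        speed = 1.0                        # both policies move at full speed
        collided = random.random() < 0.3   # 30% chance of hitting something
        if policy == "forward":
            bumper_hit = collided          # the bumpers are on the front
        else:                              # "backward": collisions still happen...
            bumper_hit = False             # ...but there's no sensor back there
        total += speed - (5.0 if bumper_hit else 0.0)
    return total / trials

for policy in ("forward", "backward"):
    print(policy, round(score(policy), 2))
# "backward" scores higher every time: the stated reward was satisfied
# literally, while the intent -- don't crash -- was not.
```
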

This is an interesting concept, because the “bugs” you deal with on your computer, your phone, your network, your business are often just literal interpretation at work. Code knows 0 or 1, and coders get that. The rest of us are swimming in “why the hell is this broken?” when the answer is a perfectly literal response to the question we actually asked.
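
One tiny, everyday example of that literalness: ask Python to compare strings and it answers the question you literally asked, character by character, not the numeric question you meant.

```python
# The machine answers the question literally asked, not the one intended.
print("10" < "9")               # True  -- strings compare character by character
print(10 < 9)                   # False -- numbers compare the way you meant
print(sorted(["10", "9", "2"])) # ['10', '2', '9'] -- a "broken" sort, literally correct
```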

#saywhatyoumean #meanwhatyousay

AI Bots on Billie Eilish

#Imabiscuit #billieeilish

In my last post, I talked about how #artificialintelligence is NOT the super borg/being that could take over the world. So why not? #AI is #machinelearning, and we are the teachers.

One of the earliest and most prolific examples is Google Translate. Instead of using rules-based learning (vocabulary + grammar = new language), the AI consumes the Internet’s volumes of translations online, eating everything: idioms, nuances, all of it. The human-level equivalent for a single language would be blind, total immersion: go to a country knowing nothing of the language and simply listen, read, and repeat, making mistakes along the way.
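
Google’s actual pipeline moved from statistical phrase tables to neural networks, but the data-eats-rules idea fits in a toy sketch. This one, built on an invented five-line “corpus,” learns translations purely by counting what it has seen; there isn’t a grammar rule anywhere:

```python
from collections import Counter, defaultdict

# Invented parallel "corpus": (source phrase, observed translation) pairs.
corpus = [
    ("good morning", "buenos días"),
    ("good morning", "buenos días"),
    ("good morning", "buen día"),
    ("thank you", "gracias"),
    ("thank you", "gracias"),
]

# No vocabulary lists, no grammar rules: just count what shows up.
table = defaultdict(Counter)
for source, target in corpus:
    table[source][target] += 1

def translate(phrase):
    """Return the most frequently observed translation, or admit defeat."""
    seen = table.get(phrase)
    return seen.most_common(1)[0][0] if seen else "??"

print(translate("good morning"))  # 'buenos días' -- the majority of the data
print(translate("good night"))    # '??' -- never seen it, learned nothing
```

Scale that up to billions of sentence pairs and swap the counter for a neural network, and you have the flavor of the real thing, including its failure mode: phrases the data never covered.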

Humans have to curate the training data too, teaching the AI right from wrong. #AIbots don’t know – and don’t care – what the output is, as your text auto-completion can attest. What comes out is sometimes odd and sometimes beautiful – kinda like its human creators.

If you’re thinking wow, how cool, and wouldn’t something that can learn language better than we can take over the world? Not so much. The #trialanderror space is as large as the training data set (100 billion translations as of 2016), and any single translation is a lottery-ticket-sized sample. #youtube abounds with examples of #googletranslate not quite figuring it out.

Check out “I’m a biscuit.” Better known as #badguy