AI BS

or Artificial Intelligence Bull Shitake

There are a lot of claims being made, and as this article points out, not many of them are supported by strong evidence/math.

In Rebooting AI, Ernie Davis and I made six recommendations, each geared toward helping readers, journalists, and researchers alike assess each new result, asking the same set of questions in a limitations section in the discussion of their papers:


Stripping away the rhetoric, what does the AI system actually do? Does a “reading system” really read?


How general is the result? (Could a driving system that works in Phoenix work as well in Mumbai? Would a Rubik’s cube system work in opening bottles? How much retraining would be required?)


Is there a demo where interested readers can probe for themselves?


If an AI system is allegedly better than humans, then which humans, and how much better? (A comparison with low-wage workers who have little incentive to do well may not truly probe the limits of human ability.)


How far does succeeding at the particular task actually take us toward building genuine AI?


How robust is the system? Could it work just as well with other data sets, without massive amounts of retraining? AlphaGo works fine on a 19×19 board, but would need to be retrained to play on a rectangular board; the lack of transfer is telling.

OpenCV Speed Cam

I really need to find the time to build this DIY speed cam. From my home office window, I have an excellent view of an intersection where I would estimate about 70% of the cars don’t even stop at the posted Stop sign. Further, I would guess that close to 90% of them are going faster than the 25 MPH speed limit. Data is good.

Computer vision itself isn’t anything new, but it has only recently reached a point where it’s practical for hobbyists to utilize. Part of that is because hardware has improved dramatically in recent years, but it also helps that good open-source machine learning and computer vision software has become available. More software options are becoming available, but OpenCV is one that has been around for a while now and is still one of the most popular. Over on PyImageSearch, Adrian Rosebrock has put together a tutorial that will walk you through how to detect vehicles and then track them to estimate the speed at which they’re traveling.

Rosebrock’s guide will show you how to make your very own DIY speed camera. But even if that isn’t something you have a need for, the tutorial is worth following just to learn some useful computer vision techniques. You could, for instance, modify this setup to count how many cars enter and exit a parking lot. This can be done with affordable and readily-available hardware, so the barrier to entry is low — perfect for the kind of project that is more of a learning experience than anything else.
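
Rosebrock’s post covers the full pipeline; the sketch below is just my own minimal illustration of the core idea, not his implementation. It assumes OpenCV 4.x, a fixed camera, one vehicle at a time moving left to right across the frame, and that you have measured the real-world distance (ROAD_METERS) between two pixel columns you choose yourself.

```python
import time
import cv2

# A rough sketch of the core idea only -- not Rosebrock's implementation.
# Assumptions: OpenCV 4.x, a fixed camera, one vehicle at a time moving left
# to right, and a measured real-world distance between two pixel columns.

ROAD_METERS = 15.0                    # measured distance between the two columns below
COL_ENTER, COL_EXIT = 100, 540        # pixel columns that bracket the measured stretch
MIN_AREA = 2000                       # ignore blobs smaller than this (noise, pedestrians)

cap = cv2.VideoCapture("traffic.mp4")             # or 0 for a live webcam
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
enter_time = None

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Foreground mask: drop shadows (value 127) and clean up speckle noise.
    mask = subtractor.apply(frame)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < MIN_AREA:
            continue
        x, y, w, h = cv2.boundingRect(c)
        cx = x + w // 2                            # centroid column of the moving blob

        if enter_time is None and COL_ENTER <= cx < COL_EXIT:
            enter_time = time.time()               # vehicle entered the measured stretch
        elif enter_time is not None and cx >= COL_EXIT:
            elapsed = time.time() - enter_time
            mph = (ROAD_METERS / elapsed) * 2.23694    # m/s to miles per hour
            print(f"Estimated speed: {mph:.1f} MPH")
            enter_time = None

cap.release()
```

A real deployment would need careful calibration of the pixel-to-distance mapping and proper multi-object tracking, which is exactly why the full tutorial is worth working through.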

Problems with AI Transparency

As more and more business decisions get handed over (sometimes blindly) to computer algorithms (aka ‘AI’), companies are very late to the game in considering what consequences that delegation will yield. As a buffer against those consequences, a company may want to be more transparent about how its algorithms work, but that is not without its challenges.

To start, companies attempting to utilize artificial intelligence need to recognize that there are costs associated with transparency. This is not, of course, to suggest that transparency isn’t worth achieving, simply that it also poses downsides that need to be fully understood. These costs should be incorporated into a broader risk model that governs how to engage with explainable models and the extent to which information about the model is available to others.

Second, organizations must also recognize that security is becoming an increasing concern in the world of AI. As AI is adopted more widely, more security vulnerabilities and bugs will surely be discovered, as my colleagues and I at the Future of Privacy Forum recently argued. Indeed, security may be one of the biggest long-term barriers to the adoption of AI.

Reading Is Good For Your Brain

Science has found that reading is essential for a healthy brain. We already know reading is good for children’s developing noggins: A study of twins at the University of California at Berkeley found that kids who started reading at an earlier age went on to perform better on certain intelligence tests, such as analyses of their vocabulary size.


Other studies show that reading continues to develop the brains of adults. One 2012 Stanford University study, where people read passages of Jane Austen while inside an MRI, indicates that different types of reading exercise different parts of your brain. As you get older, another study suggests, reading might help slow down or even halt cognitive decline.

https://www.popsci.com/read-more-books

 

And it doesn’t seem to matter whether it is a physical book, an e-reader, or an audiobook (although the audiobook has a slightly different impact on the brain).

 

As for audiobooks, the research so far has found that they stimulate the brain just as deeply as black-and-white pages, although they affect your gray matter somewhat differently. Because you’re listening to a story, you’re using different methods to decode and comprehend it. With print books, you need to provide the voice, called the prosody—you’re imagining the “tune and rhythm of speech,” the intonation, the stress on certain syllables, and so on. With audio, the voice actor provides that information for you, so your brain isn’t generating the prosody itself, but rather working to understand the prosody in your ears.

Clever ‘AI’ or Poor Definition?

These types of articles seem to come down to the insatiable need for writers to sensationalize things that they don’t necessarily understand.

For example, in the scenario outlined in the article, it is unlikely that the ‘AI’ (aka a computer algorithm) was self-aware and said to itself, “hey, I have a comprehensive understanding of humans and their capabilities, so I will modify myself to ‘cheat’ at this task in a way that a human would find difficult to detect”.

More likely, the task was poorly specified and the brute-force computational model (aka ‘AI’) found a way to ‘solve’ the problem that wasn’t contemplated by the software developer.
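
To make the point concrete, here is a toy example of my own (not from the article): the fitness function was meant to reward “sort the input”, but it only checks that the output is ordered, so a brute-force search “wins” by emitting an empty list.

```python
import random

# We *meant* to reward "sort the input", but the fitness function only checks
# that the output is in non-decreasing order -- it never checks that the output
# still contains the input's elements. Brute-force search exploits the loophole.

def sloppy_fitness(output):
    """1.0 if the output is ordered; nothing else about the output is checked."""
    return 1.0 if all(a <= b for a, b in zip(output, output[1:])) else 0.0

def random_candidate():
    """The crudest possible 'search space': just guess an output list."""
    return [random.randint(0, 9) for _ in range(random.randint(0, 5))]

best, best_score = None, -1.0
for _ in range(1000):
    candidate = random_candidate()
    score = sloppy_fitness(candidate)
    if score > best_score:
        best, best_score = candidate, score

print(best, best_score)   # frequently [] or a single number: perfectly "sorted", useless
```

The search did exactly what it was rewarded for; the surprise lives entirely in the gap between what was rewarded and what was wanted.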

This clever AI hid data from its creators to cheat at its appointed task


flickr OFF

Joined 2005… Left 2018…

I knew that flickr had been on the decline for a while.  IMHO, Yahoo’s acquisition was the beginning of the end.  SmugMug’s heavy-handed idiocy of late was the last straw for me.

After a few arrogant email demands from SmarmMug, I had had enough, so I requested all of my data from flickr; it only took them a week and a half to provide the requested files.  I happily downloaded my content and deleted my account after 13 years of use.

Personal Data as an Asset

There is a well-worn axiom in business that ‘data should be treated as a corporate asset’.  This is, of course, very true, and advances in data science and ‘big data’ have the potential to make that data even more valuable.

This got me thinking that personal data should be thought about in the same way.  Think about all the data generated from what you watch, what you listen to, where you visit, what you review, data from wearables, etc.  All of this data is currently consumed and analyzed by third parties, but what if individuals were able to take control of what is, after all, their data?

Would this give rise to data science companies marketing algorithms directly to consumers (much like pharmaceutical companies market drugs directly)?  Could it also give rise to the equivalent of ‘data quackery’, similar to the natural supplements and homeopathic industry?  That is, junk algorithms that, at their most benign, do no harm and, at their worst, incent you toward dangerous courses of action?

Would there also be a new industry of ‘personal data scientists’ (like financial counselors or tax advisers) who would help you assess all of the data assets you have and how best to combine or leverage them with third parties for your benefit (and not just the benefit of those third parties)?  Wouldn’t it be great to have some control, via browser settings that let you decide what information about you gets shared (and with which companies), over the hundreds of arbitrage-like transactions that go on behind the scenes while you wait for a page to load on a commercial web site?

Teaching AI To Be ‘Smarter’ By Doubting Itself

Interesting post suggesting that, in deep learning algorithms, a measure of self-doubt may lead to higher-quality conclusions.

Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will provide a way for the smartest AI programs to measure their confidence in a prediction or a decision—essentially, to know when they should doubt themselves.

Deep learning, which involves feeding example data to a large and powerful neural network, has been an enormous success over the past few years, enabling machines to recognize objects in images or transcribe speech almost perfectly. But it requires lots of training data and computing power, and it can be surprisingly brittle.

Somewhat counterintuitively, this self-doubt offers one fix. The new approach could be useful in critical scenarios involving self-driving cars and other autonomous machines.

“You would like a system that gives you a measure of how certain it is,” says Dustin Tran, who is working on this problem at Google. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”
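
None of the quoted work is spelled out in enough detail here to reproduce, but one widely used, low-tech way to get a confidence measure out of an ordinary deep network is Monte Carlo dropout: leave dropout switched on at prediction time and look at the spread of repeated stochastic forward passes. The sketch below (PyTorch, with a made-up, untrained network) shows the mechanics only.

```python
import torch
import torch.nn as nn

# Monte Carlo dropout: keep dropout active at prediction time and look at the
# spread of repeated stochastic forward passes. The network and inputs here
# are made up (and untrained), so the numbers are meaningless -- the point is
# the mechanics of getting a prediction *plus* a measure of its uncertainty.

model = nn.Sequential(
    nn.Linear(8, 64),
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)

def predict_with_uncertainty(model, x, n_samples=50):
    model.train()                               # deliberately keep dropout stochastic
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x = torch.randn(4, 8)                           # four fake input examples
mean, std = predict_with_uncertainty(model, x)
for m, s in zip(mean.squeeze(1), std.squeeze(1)):
    print(f"prediction {m.item():+.3f}  spread {s.item():.3f}")
```

In a setting like Tran’s self-driving example, a prediction with a large spread is a signal to fall back to safer behavior rather than act on the point estimate.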

What Does An Enterprise Architect Do?

‘Enterprise Architect’ is a very fashionable title these days, which causes a bit of confusion (and consternation) for actual EA practitioners. Typically, the title is attached to someone who has deep technical knowledge about a given technology, application, or suite. That is not an Enterprise Architect.

This article does a great job of clarifying what Enterprise Architecture is and what an Architect does (or should do).

“Before answering that question, it is important to note that no architecture is a solution. Often people confuse a solution, such as corporate infrastructure, as the architecture. This is an all too common mistake and very misleading. An architecture guided the development of the infrastructure, the infrastructure is a solution – not the architecture.

“The architect’s role isn’t to create solutions. Rather the architect’s role is to inform decision-makers and guide development of solutions based on understanding business drivers and needs of the organization. Although a single person can have both a role as an architect and a developer. The architect typically takes a broader and material independent view than the developer, yet leaves much downstream detail to the development community.

“So, since architecture is not a solution what is it? It is a package of information that describes what is needed to achieve a given result and what it might look like in a future state if implemented. In order for an architecture to be effective, that is for it to be realized in solutions, it must guide decisions.

“Any good architecture addresses a well-defined scope and seeks to achieve specified goals. For example, an architecture for a back-office software suite will seek to enable improvements to back office operations, an architecture for a department network will enable department interconnectivity, an architecture for corporate infrastructure will address needed services throughout at lower costs, etc. For each scope there are decision-makers that can either accept or reject the guidance from the architect such as office managers, network managers, the head of IT, etc.

“Those that deal with Enterprise Architecture take the broadest view, deal with issues that are oftentimes beyond even the corporate level, and are most effective when they influence corporate or Board level decision-makers.”

Unpredictions for Artificial Intelligence (AI)

This post is a refreshing counterpoint to the breathless ‘AI will take over everything’ reporting that is increasingly common of late.

Self-driving cars

The first area is that “we won’t be riding in self-driving cars”. As Dr. Reddy explains: “While many are predicting a driverless future, we’re a long ‘road’ away from autonomous vehicles.” This is in terms of cars that will take commuters to work, a situation where the commuter can sit back and read his or her iPad while paying little attention to the traffic outside.

He adds: “For a number of years ahead, human operators and oversight will still rule the roads, because the discrete human judgments that are essential while driving will still require a person with all of his or her faculties — and the attendant liability for when mistakes happen. Besides technical challenges, humans tend to be more forgiving about mistakes made by human intelligence as opposed to those made by artificial intelligence.”

Disappearing jobs

The second ‘unprediction’ is that people will not be replaced by AI bots this year. Dr. Reddy states: “While it is possible that artificial intelligence agents might replace (but more likely supplement) certain administrative tasks, the reality is that worker displacement by AI is over-hyped and unlikely.” So robots won’t be taking over most jobs any time soon.

This is because, the analyst states: “Even in an environment where Automated Machine Learning is helping machines to build machines through deep learning, the really complex aspects of jobs will not be replaced. Thus, while AI will help automate various tasks that mostly we don’t want to do anyway, we’ll still need the human knowledge workers for thinking, judgment and creativity. But, routine tasks beware: AI is coming for you!”

Medical diagnosis

The third aspect is that we won’t get AI-powered medical diagnoses. As Dr. Reddy explains: “Due to a lack of training data and continued challenges around learning diagnosis and prognosis decision-making through identifying patterns, AI algorithms are not very good at medical decision automation and will only be used on a limited basis to support but not replace diagnosis and treatment recommendations by humans.”

He adds: “AI will be increasingly deployed against sporadic research needs in the medical arena, but, as with fraud detection, pattern recognition by machines only goes so far, and human insight, ingenuity and judgment come into play. People are still better than machines at learning patterns and developing intuition about new approaches.”

Importantly: “People are still better than machines at learning patterns and developing intuition about new approaches.”

AI at work

The fourth and final area is that we will still struggle with determining where artificial intelligence should be deployed. Dr. Reddy states: “Despite what you might be hearing from AI solution vendors, businesses that want to adopt AI must first conduct a careful needs assessment. As part of this process, companies also must gain a realistic view of what benefits are being sought and how AI can be strategically deployed for maximum benefit.”

The analyst adds: “IT management, business users and developers should avoid being overly ambitious and carefully assess the infrastructure and data required to drive value from AI. Best practices and ‘buy versus build’ analysis also should be part of the conversations about implementing AI applications.”