So we have gone from conflict diamonds to conflict avocados? I guess it proves out the old adage “if something is worth doing, it is worth doing for money”. Sadly.
Automated processes and workflows aren’t AI, but everyone wants to be a ‘player’ in the Artificial Intelligence space.
As I recently observed, most ‘AI’ purveyors seem to have the attitude “You can’t buy AI, but I can sell it to you!”.
Constraints can be a good creative tension to help temper and refine innovation. Just don’t have ‘too many’ constraints.
This is the stuff of sci-fi starting to come to life. It will be interesting to see how this develops over time.
Science has found that reading is essential for a healthy brain. We already know reading is good for children’s developing noggins: A study of twins at the University of California at Berkeley found that kids who started reading at an earlier age went on to perform better on certain intelligence tests, such as analyses of their vocabulary size.
Other studies show that reading continues to develop the brains of adults. One 2012 Stanford University study, where people read passages of Jane Austen while inside an MRI, indicates that different types of reading exercise different parts of your brain. As you get older, another study suggests, reading might help slow down or even halt cognitive decline.
And it doesn’t seem to matter if it is a physical book, an e-reader or an audio book (although the audio book has a slightly different impact on the brain).
As for audiobooks, the research so far has found that they stimulate the brain just as deeply as black-and-white pages, although they affect your gray matter somewhat differently. Because you’re listening to a story, you’re using different methods to decode and comprehend it. With print books, you need to provide the voice, called the prosody: you’re imagining the “tune and rhythm of speech,” the intonation, the stress on certain syllables, and so on. With audio, the voice actor provides that information for you, so your brain isn’t generating the prosody itself, but rather working to understand the prosody in your ears.
These types of articles seem to come down to the insatiable need for writers to sensationalize things that they don’t necessarily understand.
For example, in the scenario outlined in the article, it is unlikely that the ‘AI’ (aka computer algorithm) was self-aware and said to itself, “hey, I have a comprehensive understanding of humans and their capabilities, so I will modify myself to ‘cheat’ at this task in a way that a human would find difficult to detect”.
More likely, the objective was poorly defined and the brute-force computational model (aka ‘AI’) found a way to ‘solve’ the problem in a way the software developer never contemplated.
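To make that distinction concrete, here is a minimal sketch of how a mis-specified objective lets a dumb, exhaustive optimizer ‘cheat’ without any self-awareness. Everything here (the action names and both scoring functions) is hypothetical, invented purely for illustration:

```python
import itertools

# What the developer MEANT to reward: forward progress (steps).
def intended_score(actions):
    return actions.count("step")

# What was actually coded: any movement counts, and a 'fall' moves
# the agent farther than a step, so it scores more.
def actual_score(actions):
    return actions.count("step") + 5 * actions.count("fall")

ACTIONS = ["step", "fall", "wait"]

def brute_force_optimize(score, length=4):
    # Exhaustive search stands in for the 'AI' here: no understanding,
    # just trying every action sequence and keeping the best-scoring one.
    return max(itertools.product(ACTIONS, repeat=length), key=score)

best = brute_force_optimize(actual_score)
# The optimizer 'cheats': repeatedly falling scores higher than walking,
# a degenerate solution the developer never contemplated.
```

No intent or deception is involved: the search simply maximizes the objective it was given, and the gap between `intended_score` and `actual_score` is where the ‘cheating’ lives.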
Here is to a better 2019!
I knew that flickr has been on the decline for a while. IMHO, Yahoo’s acquisition was the beginning of the end. SmugMug’s heavy-handed idiocy of late was the last straw for me.
After a few arrogant email demands from SmarmMug, I had had enough, so I requested all of my data from flickr, and it only took them a week and a half to provide the requested files. I happily downloaded my content and deleted my account after 13 years of use.
There is a well-worn axiom in business that ‘data should be treated as a corporate asset’. This is, of course, very true, and the advances in data science and ‘big data’ have the potential to make that data even more valuable.
This got me thinking about how personal data should be thought about in the same way. Think about all the data generated from what you watch, what you listen to, where you visit, what you review, data from wearables, etc. All of this data is currently consumed and analyzed by third parties, but what if individuals were able to take control of what is, after all, their data?
Would this give rise to data science companies marketing algorithms directly to consumers (much like pharmaceutical companies market drugs directly)? Could it also give rise to the equivalent of ‘data quackery’, similar to the natural supplements and homeopathy industry? That is, junk algorithms that, at their most benign, do no harm and, at their worst, incent you toward dangerous courses of action?
Would there also be a new industry of ‘personal data scientists’ (like financial counselors or tax advisers) who would help you assess all of the data assets you have and how best to combine or leverage them with third parties for your benefit (and not just the benefit of those third parties)? Wouldn’t it be great to have some control over the hundreds of arbitrage-like transactions that go on behind the scenes while you wait for a commercial web site to load, via browser settings that let you control what information about you gets shared (and with which companies)?
For me, RSS never really went away, as my Feedly app convincingly proves.
Interesting post that suggests that in deep learning algorithms, questioning things may lead to higher quality conclusions.
Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will provide a way for the smartest AI programs to measure their confidence in a prediction or a decision: essentially, to know when they should doubt themselves.
Deep learning, which involves feeding example data to a large and powerful neural network, has been an enormous success over the past few years, enabling machines to recognize objects in images or transcribe speech almost perfectly. But it requires lots of training data and computing power, and it can be surprisingly brittle.
Somewhat counterintuitively, this self-doubt offers one fix. The new approach could be useful in critical scenarios involving self-driving cars and other autonomous machines.
“You would like a system that gives you a measure of how certain it is,” says Dustin Tran, who is working on this problem at Google. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”
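One simple way to get a confidence measure out of a model today is an ensemble: train several models independently and treat their disagreement as uncertainty. The sketch below illustrates the idea only; the ‘models’ are toy noisy linear predictors invented for illustration, not the probabilistic-framework approach the researchers are building:

```python
import random
import statistics

def make_model(seed):
    # Each 'model' stands in for an independently trained network:
    # same task, slightly different learned weights.
    rng = random.Random(seed)
    w = 2.0 + rng.gauss(0, 0.1)
    b = rng.gauss(0, 0.1)
    return lambda x: w * x + b

ensemble = [make_model(s) for s in range(10)]

def predict_with_uncertainty(x):
    preds = [m(x) for m in ensemble]
    # Mean is the prediction; the standard deviation measures how much
    # the models disagree -- a proxy for the system's self-doubt.
    return statistics.mean(preds), statistics.stdev(preds)

mean_near, sd_near = predict_with_uncertainty(1.0)    # familiar input
mean_far, sd_far = predict_with_uncertainty(100.0)    # far from training regime
```

Close to the training regime the models agree and `sd_near` is small; far from it their small weight differences compound, `sd_far` grows, and a downstream system (a self-driving car, say) could use that growing disagreement as a signal to defer to a human.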
‘Enterprise Architect’ is a very fashionable title these days, which causes a bit of confusion (and consternation) for actual EA practitioners. Typically, this title is attached to the role of someone who has deep technical knowledge about a given technology, application or suite. This is not an Enterprise Architect.
This article does a great job of clarifying what Enterprise Architecture is and what an Architect does (or should do).
“Before answering that question, it is important to note that no architecture is a solution. Often people confuse a solution, such as corporate infrastructure, with the architecture. This is an all too common mistake and very misleading. An architecture guided the development of the infrastructure; the infrastructure is a solution, not the architecture.
“The architect’s role isn’t to create solutions. Rather, the architect’s role is to inform decision-makers and guide development of solutions based on an understanding of the business drivers and needs of the organization. Although a single person can have both a role as an architect and as a developer, the architect typically takes a broader and material-independent view than the developer, yet leaves much downstream detail to the development community.
“So, since architecture is not a solution, what is it? It is a package of information that describes what is needed to achieve a given result and what it might look like in a future state if implemented. In order for an architecture to be effective, that is, for it to be realized in solutions, it must guide decisions.
“Any good architecture addresses a well-defined scope and seeks to achieve specified goals. For example, an architecture for a back-office software suite will seek to enable improvements to back-office operations, an architecture for a department network will enable department interconnectivity, an architecture for corporate infrastructure will address services needed throughout the organization at lower cost, etc. For each scope there are decision-makers who can either accept or reject the guidance from the architect, such as office managers, network managers, the head of IT, etc.
“Those that deal with Enterprise Architecture take the broadest view, deal with issues that are oftentimes beyond even the corporate level, and are most effective when they influence corporate or Board-level decision-makers.
This post is a refreshing counterpoint to the breathless ‘AI will take over everything’ reporting that is increasingly common of late.
The first area is that “we won’t be riding in self-driving cars”. As Dr. Reddy explains: “While many are predicting a driverless future, we’re a long ‘road’ away from autonomous vehicles.” This is in terms of cars that will take commuters to work, a situation where the commuter can sit back and read his or her iPad while paying little attention to the traffic outside.
He adds: “For a number of years ahead, human operators and oversight will still rule the roads, because the discrete human judgments that are essential while driving will still require a person with all of his or her faculties, and the attendant liability for when mistakes happen. Besides technical challenges, humans tend to be more forgiving about mistakes made by human intelligence as opposed to those made by artificial intelligence.”
The second ‘unprediction’ is that people will not be replaced by AI bots this year. Dr. Reddy states: “While it is possible that artificial intelligence agents might replace (but more likely supplement) certain administrative tasks, the reality is that worker displacement by AI is over-hyped and unlikely.” So robots won’t be taking over most jobs any time soon.
This is because, the analyst states: “Even in an environment where Automated Machine Learning is helping machines to build machines through deep learning, the really complex aspects of jobs will not be replaced. Thus, while AI will help automate various tasks that mostly we don’t want to do anyway, we’ll still need the human knowledge workers for thinking, judgment and creativity. But, routine tasks beware: AI is coming for you!”
The third aspect is that we won’t get AI-powered medical diagnoses. This is, Dr. Reddy says, “Due to a lack of training data and continued challenges around learning diagnosis and prognosis decision-making through identifying patterns, AI algorithms are not very good at medical decision automation and will only be used on a limited basis to support but not replace diagnosis and treatment recommendations by humans.”
He adds: “AI will be increasingly deployed against sporadic research needs in the medical arena, but, as with fraud detection, pattern recognition by machines only goes so far, and human insight, ingenuity and judgment come into play. People are still better than machines at learning patterns and developing intuition about new approaches.”
Importantly: “People are still better than machines at learning patterns and developing intuition about new approaches.”
AI at work
The fourth and final area is that we will still struggle with determining where artificial intelligence should be deployed. Dr. Reddy states: “Despite what you might be hearing from AI solution vendors, businesses that want to adopt AI must first conduct a careful needs assessment. As part of this process, companies also must gain a realistic view of what benefits are being sought and how AI can be strategically deployed for maximum benefit.”
The analyst adds: “IT management, business users and developers should avoid being overly ambitious and carefully assess the infrastructure and data required to drive value from AI. Best practices and ‘buy versus build’ analysis also should be part of the conversations about implementing AI applications.”
This is an excellent TED Talk on what is missing from big data (hint: it is the human element).
Why do so many companies make bad decisions, even with access to unprecedented amounts of data? With stories from Nokia to Netflix to the oracles of ancient Greece, Tricia Wang demystifies big data and identifies its pitfalls, suggesting that we focus instead on “thick data” — precious, unquantifiable insights from actual people — to make the right business decisions and thrive in the unknown.
An interesting (but not too surprising) stat from the intro is that 73% of all big data projects deliver no value.
Really? Someone had to write a 500-word ‘article’ about what should be common sense?
There’s a solution, though: Kill your notifications. Yes, really. Turn them all off. (You can leave on phone calls and text messages, if you must, but nothing else.) You’ll discover that you don’t miss the stream of cards filling your lockscreen, because they never existed for your benefit. They’re for brands and developers, methods by which thirsty growth hackers can grab your attention anytime they want. Allowing an app to send you push notifications is like allowing a store clerk to grab you by the ear and drag you into their store. You’re letting someone insert a commercial into your life anytime they want. Time to turn it off.
Not one of the stronger PopSci articles I have seen. The ‘forensics’ consist mainly of ‘look closely at the image and think about it’. Em, ok.
Oh, and, warning: there are about 20x more auto-start ads and videos on the linked page than there is actual useful content.
It seems like ‘free’ access to large datasets is the new giveaway/razor in the hopes that revenue will be generated by usage of the AI/analytics tools/razor blades that are co-hosted with the datasets.
This is hilarious because it is true. I’ve seen so many ‘startups’ spend a huge amount of money and effort trying to imitate the trappings of a startup rather than having original ideas and actually producing something. Here is a sample (more at the link above):
Hello, and welcome to our startup. We hope you’re enjoying your complimentary snifter of vaporized coconut water. Once you’re done, please place the glass into one of the blue receptacles around the office, which will send the glass to be washed and dried. Do not place it into one of the red receptacles. The red receptacles take whatever you put inside of them and launch it into space.
If you look to your left, you’ll see one of our employees using a state-of-the-art ergonomic sleeping desk. Most startups have standing desks, but we have sleeping desks, dancing desks, and even skateboarding desks. The skateboarding desks are just large skateboards you can use to skate around the office. Be careful while skating, though, because we don’t offer any sort of medical insurance, since our benefits budget all goes toward cool desks.