Lots to chew on in this article.
I think the real challenge is that in letting corporations set their own ‘standards’, you end up with a bunch of standards that only suit that particular corporation.
BMW recently made the misguided decision to charge a subscription for heated seats in some markets. Aside from the fact that the average BMW is a maintenance disaster and is nearly undriveable by its 3rd year, what is the point of charging a subscription for a fairly standard car feature? Simple greed.
Consider also that every one of those vehicles has the added expense of being fitted with the heating elements and controls that may not be enabled. That seems like wasted money unless the consumer is actually paying for the feature twice: once when it is priced into the cost of manufacturing the car and *again* when the consumer actually tries to use it.
The other problem with these feature flags hidden in your car is that BMW may not be the only one with access to them. It is not a great leap to imagine a new form of ransomware where you have to pay BMW and then some hacker to re-re-re-enable a feature on your vehicle.
This feels like an extension of ethics, in general, not being part of the curriculum in education.
Anaconda’s survey of data scientists from more than 100 countries found the ethics gap extends from academia to industry. While organizations can mitigate the problem through fairness tools and explainability solutions, neither appears to be gaining mass adoption.
Only 15% of respondents said their organization has implemented a fairness system, and just 19% reported they have an explainability tool in place.
The study authors warned that this could have far-reaching consequences:
Above and beyond the ethical concerns at play, a failure to proactively address these areas poses strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions.
The survey also revealed concerns around the security of open-source tools, business training, and data drudgery. But it’s the disregard of ethics that most troubled the study authors:
Of all the trends identified in our study, we find the slow progress to address bias and fairness, and to make machine learning explainable the most concerning. While these two issues are distinct, they are interrelated, and both pose important questions for society, industry, and academia.
While businesses and academics are increasingly talking about AI ethics, their words mean little if they don’t turn into actions.
I think these lessons have application beyond the Cloud and AWS.
Feels a bit overstated, but an interesting read on AutoML and its potential impacts on Data Science (and scientists).
There’s a good reason for all the AutoML hype: AutoML is a must-have for many organizations.
Let’s take the example of Salesforce. They explain that their “customers are looking to predict a host of outcomes - from customer churn, sales forecasts and lead conversions to email marketing click throughs, website purchases, offer acceptances, equipment failures, late payments, and much more.”
In short, ML is ubiquitous. However, for ML to be effective for each unique customer, they would “have to build and deploy thousands of personalized machine learning models trained on each individual customer’s data for every single use case” and “the only way to achieve this without hiring an army of data scientists is through automation.”
While many people see AutoML as a way to bring ease-of-use and efficiency to ML, the reality is that for many enterprise applications, there’s just no other way to do it. A company like Facebook or Salesforce or Google can’t hire data scientists to build custom models for each of their billions of users, so they automate ML instead, enabling unique models at scale.
The number of ML components that are automated depends on the platform, but with Salesforce, it includes feature inference, automated feature engineering, automated feature validation, automated model selection, and hyperparameter optimization.
That’s a mouthful.
What this means is that data scientists can deploy thousands of models in production with far less grunt work and hand-tuning, drastically reducing turnaround time.
By shifting the work from data crunching towards more meaningful analytics, AutoML enables more creative, business-focused applications of data science.
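To make that concrete, here is a minimal sketch of the kind of grunt work an AutoML platform automates (model selection plus hyperparameter search), using scikit-learn on a toy dataset. This is only an illustration of the general pattern, not Salesforce’s proprietary pipeline; the candidate models and grids are arbitrary.

```python
# Minimal sketch of automated model selection + hyperparameter tuning,
# the kind of work AutoML platforms take off a data scientist's plate.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate models and their (illustrative) hyperparameter grids.
candidates = [
    (LogisticRegression(max_iter=5000), {"C": [0.01, 0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0),
     {"n_estimators": [100, 300], "max_depth": [None, 5, 10]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5)   # automated tuning
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:            # automated model selection
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"Selected {best_model.__class__.__name__}: "
      f"CV accuracy {best_score:.3f}, test accuracy {best_model.score(X_test, y_test):.3f}")
```

An AutoML platform repeats this loop, plus feature engineering and validation, across thousands of per-customer datasets rather than one.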
A three-year effort by hundreds of engineers worldwide resulted in the publication in March of 2019 of Ethically Aligned Design (EAD) for Business, a guide for policymakers, engineers, designers, developers and corporations. The effort was headed by the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems (A/IS), with John C. Havens as Executive Director, who spoke to AI Trends for an Executive Interview. We recently connected to ask how the effort has been going. Here is an update.
EAD First Edition, a 290-page document which Havens refers to as “applied ethics,” has seen some uptake, for example by IBM, which referred to the IEEE effort within their own resource called Everyday Ethics for AI. The IBM document is 26 pages, easy to digest, and structured into five areas of focus, each with recommended action steps and an example. The example for Accountability involved an AI team developing applications for a hotel. Among the recommendations were: enable guests to turn the AI off; conduct face-to-face interviews to help develop requirements; and institute a feedback learning loop.
The OECD (Organization for Economic Cooperation and Development) issued a paper after the release of an earlier version of EAD attesting to the close affinity between the IEEE’s work and the OECD Principles on AI. The OECD cited as shared values “the need for such systems to primarily serve human well-being through inclusive and sustainable growth; to respect human-centered values and fairness; and to be robust, safe and dependable, including through transparency, explainability and accountability.”
Teaching algorithms to create novel algorithms…
Artificial intelligence (AI) is evolving, literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI.
“While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.”
Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks (for instance, spotting road signs), and researchers can spend months working out how to connect them so they work together seamlessly.
In recent years, scientists have sped up the process by automating some steps. But these programs still rely on stitching together ready-made circuits designed by humans. That means the output is still limited by engineers’ imaginations and their existing biases.
So Quoc Le, a computer scientist at Google, and colleagues developed a program called AutoML-Zero that could develop AI programs with effectively zero human input, using only basic mathematical concepts a high school student would know. “Our ultimate goal is to actually develop novel machine learning concepts that even researchers could not find,” he says.
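The core idea (evolve candidate programs built from primitive operations and keep the fittest) can be sketched in a few dozen lines. The toy below is my own illustration of that loop, not AutoML-Zero itself: programs are short sequences of basic math ops, and the ones that best approximate a target function survive and get mutated.

```python
# Toy illustration of evolutionary program search in the spirit of AutoML-Zero.
# Programs are fixed-length lists of primitive ops; the fittest survive and mutate.
import random

OPS = {
    "ident": lambda x: x,
    "add1": lambda x: x + 1.0,
    "double": lambda x: x * 2.0,
    "square": lambda x: x * x,
    "neg": lambda x: -x,
}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def fitness(program):
    # Lower is better: mean squared error against the target (x + 1)^2.
    xs = [i / 10.0 for i in range(-20, 21)]
    return sum((run(program, x) - (x + 1.0) ** 2) ** 2 for x in xs) / len(xs)

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] = random.choice(list(OPS))
    return child

random.seed(0)
population = [[random.choice(list(OPS)) for _ in range(3)] for _ in range(20)]
for _ in range(50):
    population.sort(key=fitness)          # evaluate every candidate program
    survivors = population[:5]            # "survival of the fittest"
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = min(population, key=fitness)
print("best program:", best, "error:", round(fitness(best), 4))
```

AutoML-Zero works over a far richer space (setup, predict, and learn functions built from dozens of ops), but the generational loop is the same shape.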
A post on efforts to further bolster AI transparency and fairness by the AI World Society.
Learning algorithms find patterns in data they are given. However, the processes by which the data is collected, relevant variables are defined, and hypotheses are formulated may themselves depend on structural unfairness found in society, the paper suggests.
“Algorithms based on such data could introduce or perpetuate a variety of discriminatory biases, thereby maintaining a cycle of injustice,” the authors state. “The community within statistics and machine learning that works on issues of fairness in data analysis have taken a variety of approaches to defining fairness formally, with the aim of ultimately ensuring that learning algorithms are fair.”
The paper poses some tough questions. For instance, “Since, unsurprisingly, learning algorithms that use unfair data can lead to biased or unfair conclusions, two questions immediately suggest themselves. First, what does it mean for a world and data that comes from this world to be fair? And second, if data is indeed unfair, what adjustments must be made to learning algorithms that use this data as input to produce fairer outputs?”
Cause and effect is a challenging area of statistics; correlation does not imply causation, the experts say. Teasing out causality often involves obtaining data in a carefully controlled way. An early example is the work done by James Lind for the Royal Navy, when scurvy among sailors was a health crisis. Lind organized what later came to be viewed as one of the first instances of a clinical trial. He arranged 12 sailors into six pairs and gave each pair one of six scurvy treatments thought at the time to be effective. Of the treatments, only citrus was effective. That led to citrus products being issued on all Royal Navy ships.
Whether fairness can be defined by computer scientists and engineers is an open question. “Issues of fairness and justice have occupied the ethical, legal, and political literature for centuries. While many general principles are known, such as fairness-as-proportionality, just compensation, and social equality, general definitions have proven elusive,” the paper states.
Moreover, “Indeed, a general definition may not be possible since notions of fairness are ultimately rooted in either ethical principle or ethical intuition, and both principles and intuitions may conflict.”
Mediation analysis is one approach to making algorithms more fair. Needless to say, the work is continuing.
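For readers who have not run into it, mediation analysis asks how much of one variable’s effect on an outcome flows through an intermediate variable (the mediator), for example how much of a sensitive attribute’s effect on a decision is carried by a proxy feature. Here is a minimal regression-based sketch on synthetic data; the coefficients are made up purely for illustration.

```python
# Minimal sketch of regression-based mediation analysis on synthetic data:
# how much of X's effect on Y is carried through the mediator M?
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)                        # e.g. a sensitive attribute
m = 0.8 * x + rng.normal(size=n)              # mediator influenced by x
y = 0.5 * x + 0.6 * m + rng.normal(size=n)    # outcome influenced by both

def coeffs(features, target):
    # Ordinary least squares with an intercept; return feature coefficients.
    A = np.column_stack([np.ones_like(target)] + features)
    return np.linalg.lstsq(A, target, rcond=None)[0][1:]

total_effect = coeffs([x], y)[0]          # Y ~ X
direct_effect = coeffs([x, m], y)[0]      # Y ~ X + M (coefficient on X)
indirect_effect = total_effect - direct_effect

print(f"total {total_effect:.2f} = direct {direct_effect:.2f} "
      f"+ mediated {indirect_effect:.2f}")
```

In a fairness setting, a large mediated component would flag a proxy path that an adjusted learning algorithm might need to block.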
Interesting post on how ‘magical experiences’ fueled by AI and machine learning will change how products are designed and used.
There is growing momentum demonstrated by technical progress and ecosystem development. One of the leading startups working on helping engineers take advantage of TinyML by automating data collection, training, testing, and deployment is Edge Impulse. Starting with embedded or IoT devices, Edge Impulse is offering developers the tools and guidance to collect data straight from edge devices and build a model that can detect “behavior,” discern right from wrong and noise from signal, so they can actually make sense of what happens in the real world, across billions of devices, in every place, and everything. By deploying the Edge Impulse model as part of everyone’s firmware, you create the biggest neural network on earth. Effectively, Edge Impulse gives brains to your previously passive devices so you can build a better product with neural personality.
Another interesting company is Syntiant, which is building a new processor for deep learning, dramatically different from traditional computing methods. By focusing on memory access and parallel processing, their Neural Decision Processors operate at efficiency levels that are orders of magnitude higher than any other technology. The company claims its processors can make devices approximately 200x more efficient by providing 20x the throughput over current low-power MCU solutions and, subsequently, enabling larger networks at significantly lower power. The result? Voice interfaces that allow a far richer and more reliable user experience, otherwise known as “Wow” and “How did it do that?”
An interesting article on business challenges with artificial intelligence.
Artificial intelligence (AI) technology continues to advance by leaps and bounds and is quickly becoming a potential disrupter and essential enabler for nearly every company in every industry. At this stage, one of the barriers to widespread AI deployment is no longer the technology itself; rather, it’s a set of challenges that ironically are far more human: ethics, governance, and human values.
As AI expands into almost every aspect of modern life, the risks of misbehaving AI increase exponentially, to a point where those risks can literally become a matter of life and death. Real-world examples of AI gone awry include systems that discriminate against people based on their race, age, or gender, social media systems that inadvertently spread rumors and disinformation, and more.
Even worse, these examples are just the tip of the iceberg. As AI is deployed on a larger scale, the associated risks will likely only increase, potentially having serious consequences for society at large, and even greater consequences for the companies responsible. From a business perspective, these potential consequences include everything from lawsuits, regulatory fines, and angry customers to embarrassment, reputation damage, and destruction of shareholder value.
Yet with AI now becoming a required business capability, not just a “nice to have,” companies no longer have the option to avoid AI’s unique risks simply by avoiding AI altogether. Instead, they must learn how to identify and manage AI risks effectively. In order to achieve the potential of human and machine collaboration, organizations need to communicate a plan for AI that is adopted and spoken from the mailroom to the boardroom. By having an ethical framework in place, organizations create a common language by which to articulate trust and help ensure integrity of data among all of their internal and external stakeholders. Having a common framework and lens to apply the governance and management of risks associated with AI consistently across the enterprise can enable faster and more consistent adoption of AI.
Some thoughts from the creator of an open source machine learning platform focused on developers.
Machine learning has, historically, been the purview of data science teams. This makes it a bit counter-intuitive that we built Cortex, our open source ML infrastructure platform, primarily for software engineers.
Going all the way back to machine learning’s roots in the 1950s, the field has historically been research-focused: things like Arthur Samuel’s checkers-playing AI (1959) or IBM’s chess-playing Deep Blue (1997).
Starting around 2010, there was a renewed interest in deep learning, with major tech companies releasing breakthroughs. Projects like Google Brain, DeepMind, and OpenAI (among others) began publishing new, state-of-the-art results.
These breakthroughs manifested as features in big companies’ products.
In addition, this renewed focus on machine learning, and particularly deep learning, led to the creation of better tools and frameworks, like Google’s TensorFlow and Facebook’s PyTorch, as well as open source models and datasets, like OpenAI’s GPT-2 and ImageNet.
With better tools, open source models, and accessible data, it became possible for small teams to train models for production. As a consequence of this democratization, a wave of new products has emerged, all of which at their core are “just” ML models wrapped in software. We refer to these products as ML-native.
This article briefly outlines how, as Machine Learning (ML) becomes more a part of corporate solutions, the need for MLOps will become more critical.
The term MLOps refers to a set of techniques and practices for data scientists to collaborate with operations professionals. MLOps aims to manage deployment of machine learning and deep learning models in large-scale production environments.
The term DevOps comes from the software engineering world and is concerned with developing and operating large-scale software systems. DevOps introduces two concepts: Continuous Integration (CI) and Continuous Delivery (CD). DevOps aims to shorten development cycles, increase deployment velocity and create dependable releases.
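What those ideas look like for a model is easiest to see in a small example. The sketch below is a hypothetical promotion gate: a freshly trained model is only copied into a registry directory if it clears an accuracy threshold, the ML analogue of a CI check blocking a bad release. The ACCURACY_GATE value and model_registry path are placeholders, not any particular MLOps product.

```python
# Minimal sketch of an MLOps-style promotion gate: train, evaluate, and only
# "release" the model if it passes a quality check (like CI/CD for code).
from pathlib import Path
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

ACCURACY_GATE = 0.90                 # the continuous-delivery quality bar
REGISTRY = Path("model_registry")    # stand-in for a real model registry

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

if accuracy >= ACCURACY_GATE:
    # Passing models get promoted; failing ones never reach production.
    REGISTRY.mkdir(exist_ok=True)
    (REGISTRY / "model.pkl").write_bytes(pickle.dumps(model))
    print(f"promoted: accuracy {accuracy:.3f}")
else:
    raise SystemExit(f"blocked: accuracy {accuracy:.3f} below gate {ACCURACY_GATE}")
```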
A BBC post speculating on whether there is a cooling-off coming for AI.
The last decade was a big one for artificial intelligence but researchers in the field believe that the industry is about to enter a new phase.
Hype surrounding AI has peaked and troughed over the years as the abilities of the technology get overestimated and then re-evaluated.
The peaks are known as AI summers, and the troughs AI winters.
The 10s were arguably the hottest AI summer on record with tech giants repeatedly touting AI’s abilities.
AI pioneer Yoshua Bengio, sometimes called one of the “godfathers of AI”, told the BBC that AI’s abilities were somewhat overhyped in the 10s by certain companies with an interest in doing so.
There are signs, however, that the hype might be about to start cooling off.
This post explores why tinyML may be the next big thing.
A coalescence of several trends has made the microcontroller not just a conduit for implementing IoT applications but also a powerful, independent processing mechanism in its own right. In recent years, hardware advancements have made it possible for microcontrollers to perform calculations much faster. Improved hardware coupled with more efficient development standards have made it easier for developers to build programs on these devices. Perhaps the most important trend, though, has been the rise of tiny machine learning, or TinyML. It’s a technology we’ve been following since investing in a startup in this space.
TinyML broadly encapsulates the field of machine learning technologies capable of performing on-device analytics of sensor data at extremely low power. Between hardware advancements and the TinyML community’s recent innovations in machine learning, it is now possible to run increasingly complex deep learning models (the foundation of most modern artificial intelligence applications) directly on microcontrollers. A quick glance under the hood shows this is fundamentally possible because deep learning models are compute-bound, meaning their efficiency is limited by the time it takes to complete a large number of arithmetic operations. Advancements in TinyML have made it possible to run these models on existing microcontroller hardware.
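As a rough illustration of the tooling side, TensorFlow Lite can convert and quantize a trained Keras model down to a small binary, which is the usual first step toward running it on microcontroller-class hardware with the TFLite Micro runtime. The tiny model below is just a stand-in for whatever you actually trained, and full int8 deployment would need a representative dataset on top of this.

```python
# Sketch: shrink a small Keras model with post-training quantization so it
# can target microcontroller-class hardware (TensorFlow Lite / TFLite Micro).
import tensorflow as tf

# Placeholder model; in practice you would load your trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"quantized model size: {len(tflite_model)} bytes")
```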
In other words, those 250 billion microcontrollers in our printers, TVs, cars, and pacemakers can now perform tasks that previously only our computers and smartphones could handle. All of our devices and appliances are getting smarter thanks to microcontrollers.
TinyML represents a collaborative effort between the embedded ultra-low power systems and machine learning communities, which traditionally have operated largely independently. This union has opened the floodgates for new and exciting applications of on-device machine learning. However, the knowledge that deep learning and microcontrollers are a perfect match has been pretty exclusive, hidden behind the walls of tech giants like Google and Apple. This becomes more obvious when you learn that this paradigm of running modified deep learning models on microcontrollers is responsible for the “Okay Google” and “Hey Siri” functionality that has been around for years.
But why is it important that we be able to run these models on microcontrollers? Much of the sensor data generated today is discarded because of cost, bandwidth, or power constraints, or sometimes a combination of all three. For example, take an imagery micro-satellite. Such satellites are equipped with cameras capable of capturing high resolution images but are limited by the size and number of photos they can store and how often they can transmit those photos to Earth. As a result, such satellites have to store images at low resolution and at a low frame rate. What if we could use image detection models to save high resolution photos only if an object of interest (like a ship or weather pattern) was present in the image? While the computing resources on these micro-satellites have historically been too small to support image detection deep learning models, TinyML now makes this possible.
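The satellite example boils down to a simple filter loop: run an on-device detector on every frame and spend the scarce storage budget only on the hits. Here is a minimal sketch of that logic; the is_interesting callable is a hypothetical stand-in for the deployed TinyML detector.

```python
# Sketch of the micro-satellite idea: only keep frames the on-device model
# flags as interesting, so storage and downlink bandwidth are not wasted.
from typing import Callable, Iterable, List

def filter_captures(frames: Iterable, is_interesting: Callable[[object], bool],
                    max_stored: int = 100) -> List:
    """Keep at most max_stored frames that the on-device detector flags."""
    kept = []
    for frame in frames:
        if len(kept) >= max_stored:
            break                      # storage budget exhausted
        if is_interesting(frame):      # run the TinyML detector on-device
            kept.append(frame)         # store at full resolution
        # uninteresting frames are dropped (or could be kept downsampled)
    return kept

# Toy usage with a fake detector: "interesting" frames are the even-numbered ones.
print(filter_captures(range(20), is_interesting=lambda f: f % 2 == 0, max_stored=5))
```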
or Artificial Intelligence Bull Shitake
There are a lot of claims being made, and as this article points out, not many of them are supported by strong evidence/math.
In Rebooting AI, Ernie Davis and I made six recommendations, each geared towards how readers, journalists, and researchers might equally assess each new result that they achieve, asking the same set of questions in a limitations section in the discussion of their papers:
Stripping away the rhetoric, what does the AI system actually do? Does a “reading system” really read?
How general is the result? (Could a driving system that works in Phoenix work as well in Mumbai? Would a Rubik’s cube system work in opening bottles? How much retraining would be required?)
Is there a demo where interested readers can probe for themselves?
If an AI system is allegedly better than humans, then which humans, and how much better? (A comparison with low-wage workers who have little incentive to do well may not truly probe the limits of human ability.)
How far does succeeding at the particular task actually take us toward building genuine AI?
How robust is the system? Could it work just as well with other data sets, without massive amounts of retraining? AlphaGo works fine on a 19×19 board, but would need to be retrained to play on a rectangular board; the lack of transfer is telling.
I really need to find the time to build this DIY speed cam. From my home office window, I have an excellent view of an intersection where I would estimate about 70% of the cars don’t even stop at the posted Stop sign. Further, I would guess that close to 90% of them are going faster than the 25 MPH speed limit. Data is good.
Computer vision itself isn’t anything new, but it has only recently reached a point where it’s practical for hobbyists to utilize. Part of that is because hardware has improved dramatically in recent years, but it also helps that good open-source machine learning and computer vision software has become available. More software options are becoming available, but OpenCV is one that has been around for a while now and is still one of the most popular. Over on PyImageSearch, Adrian Rosebrock has put together a tutorial that will walk you through how to detect vehicles and then track them to estimate the speed at which they’re traveling.
Rosebrock’s guide will show you how to make your very own DIY speed camera. But even if that isn’t something you have a need for, the tutorial is worth following just to learn some useful computer vision techniques. You could, for instance, modify this setup to count how many cars enter and exit a parking lot. This can be done with affordable and readily-available hardware, so the barrier to entry is low, perfect for the kind of project that is more of a learning experience than anything else.
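For a sense of the final step: once detection and centroid tracking give you two (pixel position, timestamp) samples for a vehicle, the speed estimate itself is simple arithmetic, pixels to meters via a calibration factor, divided by elapsed time, converted to MPH. The sketch below uses made-up calibration numbers; it is not Rosebrock’s code, just the idea.

```python
# Sketch of the speed-estimation arithmetic behind a DIY speed camera.
# Calibration values are illustrative; yours would come from measuring the
# real-world distance visible in your camera's field of view.
METERS_PER_PIXEL = 0.05       # calibration: real-world meters per image pixel
MPS_TO_MPH = 2.23694          # meters/second to miles/hour

def estimate_speed_mph(x_start_px: float, t_start_s: float,
                       x_end_px: float, t_end_s: float) -> float:
    """Speed of a tracked vehicle from two (pixel position, timestamp) samples."""
    meters = abs(x_end_px - x_start_px) * METERS_PER_PIXEL
    seconds = t_end_s - t_start_s
    return (meters / seconds) * MPS_TO_MPH

# Example: a tracked centroid moves 400 px across the frame in 1.5 seconds.
print(f"{estimate_speed_mph(100, 0.0, 500, 1.5):.1f} MPH")
```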
As more and more business decisions get handed over (sometimes blindly) to computer algorithms (aka ‘AI’), companies are very late to the game in considering what consequences that delegation will yield. As a buffer against these consequences, a company may want to be more transparent about how its algorithms work, but that is not without its challenges.
To start, companies attempting to utilize artificial intelligence need to recognize that there are costs associated with transparency. This is not, of course, to suggest that transparency isn’t worth achieving, simply that it also poses downsides that need to be fully understood. These costs should be incorporated into a broader risk model that governs how to engage with explainable models and the extent to which information about the model is available to others.
Second, organizations must also recognize that security is becoming an increasing concern in the world of AI. As AI is adopted more widely, more security vulnerabilities and bugs will surely be discovered, as my colleagues and I at the Future of Privacy Forum recently argued. Indeed, security may be one of the biggest long-term barriers to the adoption of AI.
Some Pi-based nerdiness/extreme creativity.
Interesting background on some of the notable (and at times annoying) buzzwords of 2019.
So we have gone from conflict diamonds to conflict avocados? I guess it proves out the old adage “if something is worth doing, it is worth doing for money”. Sadly.