AI Ethics Not Being Taught to Data Scientists

This feels like an extension of ethics, in general, not being part of the curriculum in education.

Anaconda’s survey of data scientists from more than 100 countries found the ethics gap extends from academia to industry. While organizations can mitigate the problem through fairness tools and explainability solutions, neither appears to be gaining mass adoption.

Only 15% of respondents said their organization has implemented a fairness system, and just 19% reported they have an explainability tool in place.
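
To make those terms concrete, here’s a minimal sketch of the kind of check a fairness system might run: comparing positive-prediction rates across two groups (demographic parity). The data and the 0.1 threshold below are invented purely for illustration; this isn’t any particular vendor’s tool.

```python
# Minimal sketch of one fairness check: demographic parity difference,
# i.e. the gap in positive-prediction rates between two groups.
# All data here is made up for illustration.
import numpy as np

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])                  # model predictions
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # protected attribute

rate_a = preds[group == "a"].mean()  # positive-prediction rate, group a
rate_b = preds[group == "b"].mean()  # positive-prediction rate, group b
gap = abs(rate_a - rate_b)

print(f"demographic parity difference: {gap:.2f}")
if gap > 0.1:  # arbitrary threshold, chosen only for this example
    print("warning: model may treat the two groups unequally")
```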

The study authors warned that this could have far-reaching consequences:

Above and beyond the ethical concerns at play, a failure to proactively address these areas poses strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions.

The survey also revealed concerns around open-source tool security, business training, and data drudgery. But it’s the disregard of ethics that most troubled the study authors:

Of all the trends identified in our study, we find the slow progress to address bias and fairness, and to make machine learning explainable, the most concerning. While these two issues are distinct, they are interrelated, and both pose important questions for society, industry, and academia.

While businesses and academics are increasingly talking about AI ethics, their words mean little if they don’t turn into actions.

Is AutoML the End of Data Science?

Feels a bit overstated, but an interesting read on AutoML and its potential impact on data science (and data scientists).

There’s a good reason for all the AutoML hype: AutoML is a must-have for many organizations.

Let’s take the example of Salesforce. They explain that their “customers are looking to predict a host of outcomes — from customer churn, sales forecasts and lead conversions to email marketing click throughs, website purchases, offer acceptances, equipment failures, late payments, and much more.”

In short, ML is ubiquitous. However, for ML to be effective for each unique customer, they would “have to build and deploy thousands of personalized machine learning models trained on each individual customer’s data for every single use case” and “the only way to achieve this without hiring an army of data scientists is through automation.”

While many people see AutoML as a way to bring ease-of-use and efficiency to ML, the reality is that for many enterprise applications, there’s just no other way to do it. A company like Facebook or Salesforce or Google can’t hire data scientists to build custom models for each of their billions of users, so they automate ML instead, enabling unique models at scale.

Which ML components are automated depends on the platform, but with Salesforce, the list includes feature inference, automated feature engineering, automated feature validation, automated model selection, and hyperparameter optimization.

That’s a mouthful.
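
For a rough sense of what two of those components look like in code, here’s a minimal sketch of automated model selection plus hyperparameter optimization, using scikit-learn’s GridSearchCV as a stand-in. This is illustrative only, and says nothing about how Salesforce’s actual pipeline works.

```python
# Illustrative sketch of two AutoML components: automated model
# selection and hyperparameter optimization. scikit-learn is used
# as a stand-in; this is NOT Salesforce's pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Toy dataset standing in for one customer's data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Candidate models and hyperparameter grids: the search space an
# AutoML system would explore automatically for each use case.
candidates = [
    (LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1.0, 10.0]}),
    (RandomForestClassifier(random_state=0),
     {"n_estimators": [50, 200], "max_depth": [None, 10]}),
]

best_score, best_model = -1.0, None
for estimator, grid in candidates:
    search = GridSearchCV(estimator, grid, cv=5)  # hyperparameter optimization
    search.fit(X_train, y_train)
    if search.best_score_ > best_score:           # automated model selection
        best_score, best_model = search.best_score_, search.best_estimator_

print(f"selected: {type(best_model).__name__}, "
      f"test accuracy = {best_model.score(X_test, y_test):.3f}")
```

Run this per customer and per use case, and you get the thousands-of-models picture the Salesforce quote describes, without a human hand-tuning each one.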

What this means is that data scientists can deploy thousands of models in production with far less grunt work and hand-tuning, drastically reducing turnaround time.

By shifting the work from data crunching towards more meaningful analytics, AutoML enables more creative, business-focused applications of data science.