The same old thing with a new twist (aka my personal infocloud)
Lots to chew on in this article.
I think the real challenge is that in letting corporations set their own ‘standards,’ you end up with a bunch of standards that only suit each particular corporation.
This is a bit of a departure from the tech-oriented articles that I usually post but I thought I would just “put this out there” because I couldn’t find any relevant info when I had this issue with this product.
Basically, the Jotul GV 370 DV is a gas stove (the kind you use to heat a room, not to cook on). It was installed and minimally tested for gas flow. The problem was, it wouldn’t stay lit when we tried to use it for the first time. The pattern was: the pilot would light, 30-90 seconds would go by, then the stove would flash-combust (the gas in the chamber would ignite) rather forcefully but not stay on. This cycle would continue until a red LED on the front of the control unit started flashing. At this point, I turned off the stove, turned off the gas, and started looking for answers.
I read and re-read the installation docs that came with the Jotul GV 370 DV. One thing I noticed is that the damper setting on it was not correct for the amount of vent pipe that was installed. That was an easy fix. Unfortunately, it didn’t solve the problem (or change it at all). Internet searches didn’t reveal any additional useful information, just a couple of edge cases and people arguing philosophy rather than practical solutions.
I even tried the justAnswers web site. Paid $5 for a ‘trial membership’ and was connected to an absolutely useless ‘expert’ who just tried to read me the online posts I had already found via google. His final bit of ‘expert advice’ was to get a voltage meter, disassemble the stove, and tell him what the voltage readings were on all the stove components. An absolutely pointless exercise. I thanked him for wasting my time and requested a refund from justAnswers.
At this point, I elected to take the glass off the front of the unit and inspect the burning media bed to make sure the gas jets weren’t blocked or obstructed. This is when I noticed that the gas bed was out of alignment with the pilot starter. I removed the gas bed tray and re-seated it so that the notch for the pilot was centered on the pilot (instead of all the way to the right like it was when I originally opened it up). After this adjustment, I carefully placed the gas bed media back on the pan, reassembled the glass and turned the gas back on.
When I turned on the stove, the pilot came on; 30 seconds later the perimeter of the bed lit, went out, re-lit, then stayed on. Success! All that because of a one-centimeter misalignment of the gas bed with the pilot.
So there you have it. Hopefully this will help someone else who has this issue quickly solve the problem without delay or ‘expert’ help.
This article reinforces what I have been saying for years: microservices are a big mistake, especially for developers who don’t understand distributed systems, high availability and observability. To be successful, they must be properly designed and implemented, unlike most of the copy-and-paste, we-don’t-need-no-stinkin-design development that is seen today.
From the article:
We engineers have an affliction. It’s called “wanting to use the latest tech because it sounds cool, even though it’s technically more difficult.” Got that from the doctor’s office, it’s 100% legit. The diagnosis was written on my prescription for an over-the-counter monolith handbook. From 2004. Seriously though, we do this all the time. Every time something cool happens, we flock to it like moths to a campfire. And more often than not, we get burned.
This feels like an extension of ethics, in general, not being part of the curriculum in education.
Anaconda’s survey of data scientists from more than 100 countries found the ethics gap extends from academia to industry. While organizations can mitigate the problem through fairness tools and explainability solutions, neither appears to be gaining mass adoption.
Only 15% of respondents said their organization has implemented a fairness system, and just 19% reported they have an explainability tool in place.
The study authors warned that this could have far-reaching consequences:
Above and beyond the ethical concerns at play, a failure to proactively address these areas poses strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions.
The survey also revealed concerns around the security of open-source tools and business training, and data drudgery. But it’s the disregard of ethics that most troubled the study authors:
Of all the trends identified in our study, we find the slow progress to address bias and fairness, and to make machine learning explainable the most concerning. While these two issues are distinct, they are interrelated, and both pose important questions for society, industry, and academia.
While businesses and academics are increasingly talking about AI ethics, their words mean little if they don’t turn into actions.
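To make the idea of a ‘fairness system’ a bit more concrete, here is a minimal sketch (my own illustration, not anything from the survey) of the kind of check such a tool automates: comparing a model’s approval rates across groups, a metric commonly called demographic parity. The data and the group approval rates are invented.

```python
import numpy as np

# Synthetic "model decisions" for two groups; the rates are invented
# purely to illustrate the check, not drawn from any real system.
rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)                          # protected attribute
approved = rng.random(1000) < np.where(group == "A", 0.60, 0.45)   # biased decisions

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")
# A difference near zero suggests parity; a large gap flags potential bias.
```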
I think these lessons have application beyond the Cloud and AWS.
Teaching algorithms to create novel algorithms…
Artificial intelligence (AI) is evolving—literally. Researchers have created software that borrows concepts from Darwinian evolution, including “survival of the fittest,” to build AI programs that improve generation after generation without human input. The program replicated decades of AI research in a matter of days, and its designers think that one day, it could discover new approaches to AI.
“While most people were taking baby steps, they took a giant leap into the unknown,” says Risto Miikkulainen, a computer scientist at the University of Texas, Austin, who was not involved with the work. “This is one of those papers that could launch a lot of future research.”
Building an AI algorithm takes time. Take neural networks, a common type of machine learning used for translating languages and driving cars. These networks loosely mimic the structure of the brain and learn from training data by altering the strength of connections between artificial neurons. Smaller subcircuits of neurons carry out specific tasks—for instance spotting road signs—and researchers can spend months working out how to connect them so they work together seamlessly.
In recent years, scientists have sped up the process by automating some steps. But these programs still rely on stitching together ready-made circuits designed by humans. That means the output is still limited by engineers’ imaginations and their existing biases.
So Quoc Le, a computer scientist at Google, and colleagues developed a program called AutoML-Zero that could develop AI programs with effectively zero human input, using only basic mathematical concepts a high school student would know. “Our ultimate goal is to actually develop novel machine learning concepts that even researchers could not find,” he says.
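As a rough illustration of the evolutionary-search idea behind AutoML-Zero (a toy of my own, not Google’s actual system, which evolves whole learning algorithms), here is a sketch that evolves a tiny program built from basic math operations until it reproduces a target function:

```python
import random

# The "instruction set": basic operations a program can be composed from.
OPS = {
    "add1": lambda x: x + 1.0,
    "sub1": lambda x: x - 1.0,
    "double": lambda x: x * 2.0,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def fitness(program, target=lambda x: 2 * x * x + 1):
    # Lower is better: mean squared error on a few sample points.
    pts = [-2.0, -1.0, 0.5, 1.0, 2.0]
    return sum((run(program, x) - target(x)) ** 2 for x in pts) / len(pts)

def mutate(program):
    child = list(program)
    child[random.randrange(len(child))] = random.choice(list(OPS))
    return child

random.seed(1)
population = [[random.choice(list(OPS)) for _ in range(3)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness)
    survivors = population[:10]                      # "survival of the fittest"
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = min(population, key=fitness)
print("best program:", best, "error:", fitness(best))
# Evolution typically discovers ["square", "double", "add1"], i.e. 2x^2 + 1.
```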
A post on efforts to further bolster AI transparency and fairness by the AI World Society.
Learning algorithms find patterns in data they are given. However, the processes by which the data is collected, relevant variables are defined, and hypotheses are formulated may all depend on structural unfairness found in society, the paper suggests.
“Algorithms based on such data could introduce or perpetuate a variety of discriminatory biases, thereby maintaining a cycle of injustice,” the authors state. “The community within statistics and machine learning that works on issues of fairness in data analysis have taken a variety of approaches to defining fairness formally, with the aim of ultimately ensuring that learning algorithms are fair.”
The paper poses some tough questions. For instance, “Since, unsurprisingly, learning algorithms that use unfair data can lead to biased or unfair conclusions, two questions immediately suggest themselves. First, what does it mean for a world and data that comes from this world to be fair? And second, if data is indeed unfair, what adjustments must be made to learning algorithms that use this data as input to produce fairer outputs?”
Cause and effect is a challenging area of statistics; correlation does not imply causation, the experts say. Teasing out causality often involves obtaining data in a carefully controlled way. An early example is the work done by James Lind for the Royal Navy, when scurvy among sailors was a health crisis. Lind organized what later came to be viewed as one of the first instances of a clinical trial. He arranged 12 sailors into six pairs, and gave each pair one of six scurvy treatments thought at the time to be effective. Of the treatments, only citrus was effective. That led to citrus products being issued on all Royal Navy ships.
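A quick simulation shows why Lind’s design works: with random assignment, the groups differ only by treatment, so the difference in average outcomes estimates the causal effect. This is just an illustrative sketch with invented numbers, not data from any real trial.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
baseline = rng.normal(50, 10, n)    # each subject's untreated outcome
treated = rng.random(n) < 0.5       # coin-flip (randomized) assignment
true_effect = 5.0
outcome = baseline + true_effect * treated

# Because assignment was random, the simple difference in means
# recovers the causal effect despite individual variation.
estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated effect: {estimate:.2f} (true effect: {true_effect})")
```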
Whether fairness can be defined by computer scientists and engineers is an open question. “Issues of fairness and justice have occupied the ethical, legal, and political literature for centuries. While many general principles are known, such as fairness-as-proportionality, just compensation, and social equality, general definitions have proven elusive,” the paper states.
Moreover, “Indeed, a general definition may not be possible since notions of fairness are ultimately rooted in either ethical principle or ethical intuition, and both principles and intuitions may conflict.”
Mediation analysis is one approach to making algorithms more fair. Needless to say, the work is continuing.
This article briefly outlines how as Machine Learning (ML) becomes more a part of corporate solutions, the need for MLOps is going to become more critical.
The term MLOps refers to a set of techniques and practices that help data scientists and operations professionals collaborate. MLOps aims to manage the deployment of machine learning and deep learning models in large-scale production environments.
The term DevOps comes from the software engineering world and is concerned with developing and operating large-scale software systems. DevOps introduces two concepts: Continuous Integration (CI) and Continuous Delivery (CD). DevOps aims to shorten development cycles, increase deployment velocity and create dependable releases.
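As a hedged sketch of how the CI idea carries over to MLOps, here is a minimal deployment ‘gate’ that refuses to promote a candidate model unless it beats the production baseline on held-out data. The thresholds and scores are placeholders; a real pipeline would compute them from an actual evaluation step.

```python
import sys

ACCURACY_FLOOR = 0.90       # assumed minimum acceptable accuracy
BASELINE_ACCURACY = 0.92    # assumed score of the model currently in production

def validate_candidate(candidate_accuracy: float) -> bool:
    """Return True only if the candidate clears both the floor and the baseline."""
    if candidate_accuracy < ACCURACY_FLOOR:
        print(f"FAIL: {candidate_accuracy:.3f} is below floor {ACCURACY_FLOOR}")
        return False
    if candidate_accuracy <= BASELINE_ACCURACY:
        print(f"FAIL: {candidate_accuracy:.3f} does not beat baseline {BASELINE_ACCURACY}")
        return False
    print(f"PASS: {candidate_accuracy:.3f} cleared the gate; safe to promote")
    return True

if __name__ == "__main__":
    # In CI, the candidate's score would come from an automated evaluation step;
    # here it is passed on the command line purely for illustration.
    score = float(sys.argv[1]) if len(sys.argv) > 1 else 0.95
    sys.exit(0 if validate_candidate(score) else 1)
```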
or Artificial Intelligence Bull Shitake
There are a lot of claims being made, and as this article points out, not many of them are supported by strong evidence/math.
In Rebooting AI, Ernie Davis and I made six recommendations, each geared toward how readers, journalists, and researchers alike might assess each new result, asking the same set of questions in a limitations section in the discussion of their papers:
Stripping away the rhetoric, what does the AI system actually do? Does a “reading system” really read?
How general is the result? (Could a driving system that works in Phoenix work as well in Mumbai? Would a Rubik’s cube system work in opening bottles? How much retraining would be required?)
Is there a demo where interested readers can probe for themselves?
If an AI system is allegedly better than humans, then which humans, and how much better? (A comparison with low-wage workers who have little incentive to do well may not truly probe the limits of human ability.)
How far does succeeding at the particular task actually take us toward building genuine AI?
How robust is the system? Could it work just as well with other data sets, without massive amounts of retraining? AlphaGo works fine on a 19×19 board, but would need to be retrained to play on a rectangular board; the lack of transfer is telling.
I really need to find the time to build this DIY speed cam. From my home office window, I have an excellent view of an intersection where I would estimate about 70% of the cars don’t even stop at the posted Stop sign. Further, I would guess that close to 90% of them are going faster than the 25 MPH speed limit. Data is good.
Computer vision itself isn’t anything new, but it has only recently reached a point where it’s practical for hobbyists to utilize. Part of that is because hardware has improved dramatically in recent years, but it also helps that good open-source machine learning and computer vision software has become available. More software options are becoming available, but OpenCV is one that has been around for a while now and is still one of the most popular. Over on PyImageSearch, Adrian Rosebrock has put together a tutorial that will walk you through how to detect vehicles and then track them to estimate the speed at which they’re traveling.
Rosebrock’s guide will show you how to make your very own DIY speed camera. But even if that isn’t something you have a need for, the tutorial is worth following just to learn some useful computer vision techniques. You could, for instance, modify this setup to count how many cars enter and exit a parking lot. This can be done with affordable and readily-available hardware, so the barrier to entry is low — perfect for the kind of project that is more of a learning experience than anything else.
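For a taste of the technique, here is a heavily stripped-down sketch of the core idea (background subtraction plus centroid displacement). This is not Rosebrock’s actual code, which uses a proper object tracker and careful calibration; the video file name and the meters-per-pixel value here are assumptions you would replace with your own footage and measurements.

```python
import cv2

# Calibration: how many meters one pixel spans at the road's distance.
# This value is an assumption; measure it from known markings in your scene.
METERS_PER_PIXEL = 0.05
MPH_PER_MPS = 2.237                      # 1 meter/second is about 2.237 MPH

cap = cv2.VideoCapture("traffic.mp4")    # hypothetical clip of the intersection
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=50)

prev_cx = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.erode(mask, None, iterations=2)    # clean up foreground noise
    mask = cv2.dilate(mask, None, iterations=2)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    vehicles = [c for c in contours if cv2.contourArea(c) > 2000]
    if vehicles:
        # Naive single-object "tracking": follow the largest blob's center.
        x, y, w, h = cv2.boundingRect(max(vehicles, key=cv2.contourArea))
        cx = x + w // 2
        if prev_cx is not None:
            meters_per_sec = abs(cx - prev_cx) * METERS_PER_PIXEL * fps
            print(f"~{meters_per_sec * MPH_PER_MPS:.1f} MPH")
        prev_cx = cx
    else:
        prev_cx = None
cap.release()
```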
As more and more business decisions get handed over (sometimes blindly) to computer algorithms (aka ‘AI’), companies are very late to the game in considering what consequences that delegation will yield. As a buffer against these consequences, a company may want to be more transparent about how its algorithms work, but that is not without its challenges.
To start, companies attempting to utilize artificial intelligence need to recognize that there are costs associated with transparency. This is not, of course, to suggest that transparency isn’t worth achieving, simply that it also poses downsides that need to be fully understood. These costs should be incorporated into a broader risk model that governs how to engage with explainable models and the extent to which information about the model is available to others.
Second, organizations must also recognize that security is becoming an increasing concern in the world of AI. As AI is adopted more widely, more security vulnerabilities and bugs will surely be discovered, as my colleagues and I at the Future of Privacy Forum recently argued. Indeed, security may be one of the biggest long-term barriers to the adoption of AI.
Science has found that reading is essential for a healthy brain. We already know reading is good for children’s developing noggins: A study of twins at the University of California at Berkeley found that kids who started reading at an earlier age went on to perform better on certain intelligence tests, such as analyses of their vocabulary size.
Other studies show that reading continues to develop the brains of adults. One 2012 Stanford University study, where people read passages of Jane Austen while inside an MRI, indicates that different types of reading exercise different parts of your brain. As you get older, another study suggests, reading might help slow down or even halt cognitive decline.
https://www.popsci.com/read-more-books
And it doesn’t seem to matter if it is a physical book, an e-reader or an audio book (although the audio book has a slightly different impact on the brain).
As for audiobooks, the research so far has found that they stimulate the brain just as deeply as black-and-white pages, although they affect your gray matter somewhat differently. Because you’re listening to a story, you’re using different methods to decode and comprehend it. With print books, you need to provide the voice, called the prosody: you’re imagining the “tune and rhythm of speech,” the intonation, the stress on certain syllables, and so on. With audio, the voice actor provides that information for you, so your brain isn’t generating the prosody itself, but rather working to understand the prosody in your ears.
Interesting post suggesting that deep learning algorithms capable of questioning their own predictions may reach higher-quality conclusions.
Researchers at Uber and Google are working on modifications to the two most popular deep-learning frameworks that will enable them to handle probability. This will provide a way for the smartest AI programs to measure their confidence in a prediction or a decision—essentially, to know when they should doubt themselves.
Deep learning, which involves feeding example data to a large and powerful neural network, has been an enormous success over the past few years, enabling machines to recognize objects in images or transcribe speech almost perfectly. But it requires lots of training data and computing power, and it can be surprisingly brittle.
Somewhat counterintuitively, this self-doubt offers one fix. The new approach could be useful in critical scenarios involving self-driving cars and other autonomous machines.
“You would like a system that gives you a measure of how certain it is,” says Dustin Tran, who is working on this problem at Google. “If a self-driving car doesn’t know its level of uncertainty, it can make a fatal error, and that can be catastrophic.”
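One well-known way to get this kind of confidence measure out of an ordinary deep network is Monte Carlo dropout: leave dropout active at inference time and treat the spread of repeated predictions as an uncertainty signal. The sketch below illustrates that general idea in PyTorch; it is not the specific Uber/Google framework work described in the article, and the model and input are made up.

```python
import torch
import torch.nn as nn

# A tiny untrained regression network with dropout, purely for illustration.
model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)
model.train()                 # keep dropout stochastic even at "inference" time

x = torch.randn(1, 4)         # one made-up input
with torch.no_grad():
    # Each forward pass drops a different random subset of units,
    # so the predictions vary; their spread is the uncertainty signal.
    samples = torch.stack([model(x) for _ in range(100)])

mean, std = samples.mean(), samples.std()
print(f"prediction: {mean:.3f} +/- {std:.3f}")
# A large std relative to the mean is the network saying "I'm not sure,"
# exactly the signal a self-driving stack could use to fall back to caution.
```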
‘Enterprise Architect’ is a very fashionable title these days which causes a bit of confusion (and consternation) for actual EA practitioners. Typically, this title is attached to the role of someone who has deep technical knowledge about a given technology/application/suite. This is not an Enterprise Architect.
This article does a great job of clarifying what Enterprise Architecture is and what an Architect does (or should do).
“Before answering that question, it is important to note that no architecture is a solution. Often people confuse a solution, such as corporate infrastructure, as the architecture. This is an all too common mistake and very misleading. An architecture guided the development of the infrastructure, the infrastructure is a solution – not the architecture.
“The architect’s role isn’t to create solutions. Rather the architect’s role is to inform decision-makers and guide development of solutions based on understanding business drivers and needs of the organization. Although a single person can have both a role as an architect and a developer. The architect typically takes a broader and material independent view than the developer, yet leaves much downstream detail to the development community.
“So, since architecture is not a solution what is it? It is a package of information that describes what is needed to achieve a given result and what it might look like in a future state if implemented. In order for an architecture to be effective, that is for it to be realized in solutions, it must guide decisions.
“Any good architecture addresses a well-defined scope and seeks to achieve specified goals. For example, an architecture for a back-office software suite will seek to enable improvements to back office operations, an architecture for a department network will enable department interconnectivity, an architecture for corporate infrastructure will address needed services throughout at lower costs, etc. For each scope there are decision-makers that can either accept or reject the guidance from the architect such as office managers, network managers, the head of IT, etc.
“Those that deal with Enterprise Architecture take the broadest view, deal with issues that are oftentimes beyond even the corporate level, and are most effective when they influence corporate or Board level decision-makers.
This post is a refreshing counterpoint to the breathless ‘AI will take over everything’ reporting that is increasingly common of late.
Self-driving cars
The first area is that “we won’t be riding in self-driving cars”. As Dr. Reddy explains: “While many are predicting a driverless future, we’re a long ‘road’ away from autonomous vehicles.” This is in terms of cars that will take commuters to work, a situation where the commuter can sit back and read his or her iPad while paying little attention to the traffic outside.
He adds: “For a number of years ahead, human operators and oversight will still rule the roads, because the discrete human judgments that are essential while driving will still require a person with all of his or her faculties — and the attendant liability for when mistakes happen. Besides technical challenges, humans tend to be more forgiving about mistakes made by human intelligence as opposed to those made by artificial intelligence.”
Disappearing jobs
The second ‘unprediction’ is that people will not be replaced by AI bots this year. Dr. Reddy states: “While it is possible that artificial intelligence agents might replace (but more likely supplement) certain administrative tasks, the reality is that worker displacement by AI is over-hyped and unlikely.” So robots won’t be taking over most jobs any time soon.
This is because, the analyst states: “Even in an environment where Automated Machine Learning is helping machines to build machines through deep learning, the really complex aspects of jobs will not be replaced. Thus, while AI will help automate various tasks that mostly we don’t want to do anyway, we’ll still need the human knowledge workers for thinking, judgment and creativity. But, routine tasks beware: AI is coming for you!”
Medical diagnosis
The third aspect is that we won’t get AI-powered medical diagnoses. This is, Dr. Reddy says, “due to a lack of training data and continued challenges around learning diagnosis and prognosis decision-making through identifying patterns; AI algorithms are not very good at medical decision automation and will only be used on a limited basis to support but not replace diagnosis and treatment recommendations by humans.”
He adds: “AI will be increasingly deployed against sporadic research needs in the medical arena, but, as with fraud detection, pattern recognition by machines only goes so far, and human insight, ingenuity and judgment come into play. People are still better than machines at learning patterns and developing intuition about new approaches.”
Importantly: “People are still better than machines at learning patterns and developing intuition about new approaches.”
AI at work
The fourth and final area is that we will still struggle with determining where artificial intelligence should be deployed. Dr. Reddy states: “Despite what you might be hearing from AI solution vendors, businesses that want to adopt AI must first conduct a careful needs assessment. As part of this process, companies also must gain a realistic view of what benefits are being sought and how AI can be strategically deployed for maximum benefit.”
The analyst adds: “IT management, business users and developers should avoid being overly ambitious and carefully assess the infrastructure and data required to drive value from AI. Best practices and ‘buy versus build’ analysis also should be part of the conversations about implementing AI applications.”
This is an excellent TED Talk on what is missing from big data (hint: it is the human element).
Why do so many companies make bad decisions, even with access to unprecedented amounts of data? With stories from Nokia to Netflix to the oracles of ancient Greece, Tricia Wang demystifies big data and identifies its pitfalls, suggesting that we focus instead on “thick data” — precious, unquantifiable insights from actual people — to make the right business decisions and thrive in the unknown.
An interesting (but not too surprising) stat from the intro is that 73% of all big data projects deliver no value.
Really? Someone had to write a 500-word ‘article‘ about what should be common sense?
There’s a solution, though: Kill your notifications. Yes, really. Turn them all off. (You can leave on phone calls and text messages, if you must, but nothing else.) You’ll discover that you don’t miss the stream of cards filling your lockscreen, because they never existed for your benefit. They’re for brands and developers, methods by which thirsty growth hackers can grab your attention anytime they want. Allowing an app to send you push notifications is like allowing a store clerk to grab you by the ear and drag you into their store. You’re letting someone insert a commercial into your life anytime they want. Time to turn it off.
It seems like ‘free’ access to large datasets is the new razor giveaway, offered in the hope that revenue will be generated by usage of the co-hosted AI/analytics tools (the razor blades).
This is hilarious because it is true. I’ve seen so many ‘startups’ spend a huge amount of money and effort trying to imitate the trappings of a startup rather than having original ideas and actually producing something. Here is a sample (more at the link above):
Hello, and welcome to our startup. We hope you’re enjoying your complimentary snifter of vaporized coconut water. Once you’re done, please place the glass into one of the blue receptacles around the office, which will send the glass to be washed and dried. Do not place it into one of the red receptacles. The red receptacles take whatever you put inside of them and launch it into space.
If you look to your left, you’ll see one of our employees using a state-of-the-art ergonomic sleeping desk. Most startups have standing desks, but we have sleeping desks, dancing desks, and even skateboarding desks. The skateboarding desks are just large skateboards you can use to skate around the office. Be careful while skating, though, because we don’t offer any sort of medical insurance, since our benefits budget all goes toward cool desks.
This week, Tesla saw its stock drop by around 13%. Some were quick to pin this on the Volvo EV announcement, but I think that was a small part of it. I think the bigger part was the reduced collision rating that the Tesla S received this week, as well as the market’s seeming need to take down ‘tall poppies‘ like Tesla at any and every chance they get.