I knew that Flickr has been on the decline for a while. IMHO, Yahoo’s acquisition was the beginning of the end. SmugMug’s heavy-handed idiocy of late was the last straw for me.
After a few arrogant email demands from SmarmMug, I had had enough, so I requested all of my data from Flickr; it took them only a week and a half to provide the requested files. I happily downloaded my content and deleted my account after 13 years of use.
There is a well-worn axiom in business that ‘data should be treated as a corporate asset’. This is, of course, very true, and advances in data science and ‘big data’ have the potential to make that data even more valuable.
This got me thinking about how personal data should be thought about in the same way. Think about all the data generated from what you watch, what you listen to, where you visit, what you review, data from wearables, etc. All of this data is currently consumed and analyzed by third parties, but what if individuals were able to take control of what is, after all, their data?
Would this give rise to data science companies marketing algorithms directly to consumers (much like pharmaceutical companies market drugs directly)? Could it also give rise to the equivalent of ‘data quackery’, similar to the natural supplements and homeopathic industry? That is, junk algorithms that, at their most benign, do no harm and, at their worst, incent you toward dangerous courses of action?
Would there also be a new industry of ‘personal data scientists’ (like financial counselors or tax advisers) who would help you assess all of the data assets you have and how best to combine or leverage them with third parties to your benefit (and not just the benefit of those third parties)? Wouldn’t it be great to have some control over the hundreds of arbitrage-like transactions that go on behind the scenes while you are waiting for a page to load on a commercial web site, via browser settings that let you control what information about you gets shared (and with which companies)?
Microservices Need Architects – An excellent article on the complexity of something with ‘micro’ in its name. And yes, I know, and I am here to help, with over a decade of experience in service design and enterprise integration.
For the past two years, microservices have been taking the software development world by storm. Their use has been popularized by organizations adopting Agile Software Development, continuous delivery and DevOps, as a logical next step in the progression to remove bottlenecks that slow down software delivery. As a result, much of the public discussion on microservices is coming from software developers who feel liberated by the chance to code without the constraints and dependencies of a monolithic application structure. While this “inside the microservice” perspective is important and compelling for the developer community, there are a number of other important areas of microservice architecture that aren’t getting enough attention.
Specifically, as the number of microservices in an organization grows linearly, this new collection of services forms an unbounded system whose complexity threatens to increase exponentially. This complexity introduces problems with security, visibility, testability, and service discoverability. However, many developers currently treat these as “operational issues” and leave them for someone else to fix downstream. If addressed up front—when the software system is being designed—these aspects can be handled more effectively. Likewise, although there is discussion on techniques to define service boundaries and on the linkage between organizational structure and software composition, these areas can also benefit from an architectural approach. So, where are the architects?
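The article’s scaling concern can be made concrete with a little arithmetic. Even when services are added one at a time, the number of *potential* service-to-service interactions grows quadratically, which is one way the “unbounded system” gets away from you. A minimal sketch (the pairwise-link model is my simplification, not the article’s):

```python
# Sketch: potential pairwise interactions among n microservices.
# The service count grows linearly, but the number of possible
# service-to-service links grows quadratically: n * (n - 1) / 2.

def potential_links(n: int) -> int:
    """Number of distinct service pairs that could interact."""
    return n * (n - 1) // 2

for n in (5, 10, 50, 100):
    print(f"{n:>4} services -> {potential_links(n):>5} potential links")
```

Going from 10 services to 100 multiplies the service count by 10 but the potential interaction surface by roughly 100, which is exactly the kind of growth that security, visibility, and discoverability tooling has to contend with.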
Much has been written in the last decade about the ‘Consumerization of Corporate IT’, with the primary example being corporate users wanting to use their smartphones and tablets from home in a corporate ecosystem.
I would argue that the inverse of that trend has started in the last few years. That is, concerns that were once firmly in the corporate space are starting to bleed into the consumer space. These include:
a focus on security for personal devices, with more emphasis on firewalls, encryption, SSL, password strength, and even two-factor authentication.
a growing interest in and need for analytic and visualization tools for the growing amount of data from wearables and other in-home devices. Currently this is served by one-off tools from each vendor, but more corporate-style integration platforms are emerging that take in data from disparate systems and provide a more unified ‘dashboard’ view to consumers.
additional emphasis on in-home automation, monitoring, and control systems for everything from thermostats and lighting to locks, motion sensors, and flow sensors. Previously, this was the realm of building security groups and manufacturing plants. Automation and monitoring are also driving the previously mentioned areas of security and analysis.
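The integration-platform idea above boils down to a familiar corporate pattern: normalize readings from disparate vendor formats into one common shape before presenting a unified view. A minimal sketch, where the vendor payload shapes (`temp_f`, `hr`) are entirely hypothetical:

```python
# Sketch of the normalization step a consumer 'integration platform'
# would perform: map each vendor's payload format into a common
# Reading record, then render one unified dashboard view.
# The vendor payload shapes below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class Reading:
    source: str
    metric: str
    value: float
    unit: str

def from_thermostat(payload: dict) -> Reading:
    # Hypothetical thermostat payload: {"temp_f": 68.5}
    return Reading("thermostat", "temperature", payload["temp_f"], "F")

def from_wearable(payload: dict) -> Reading:
    # Hypothetical wearable payload: {"hr": 62}
    return Reading("wearable", "heart_rate", payload["hr"], "bpm")

dashboard = [
    from_thermostat({"temp_f": 68.5}),
    from_wearable({"hr": 62}),
]
for r in dashboard:
    print(f"{r.source}: {r.metric} = {r.value} {r.unit}")
```

Each new device type only needs one adapter function, which is the same design choice that makes corporate integration platforms extensible.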
Canary.is promotes its product as an alternative to a real home security system. Nothing could be further from reality. Here is the simple truth:
if the power goes out, you will get robbed (there is no battery backup, and it probably wouldn’t make a difference even if there were, because)
if your internet connection goes out, you will get robbed (more on this later)
if your internet upload connection experiences any slowness, you will get robbed
NONE of these things is true with a real home security system.
You must understand that this is basically just a dumb camera unit that requires an internet connection to do anything. There is no local storage or functionality in the unit itself, which means that if your internet connection is out or slow, the Canary is absolutely useless. As a consequence, it is constantly trying to upload video for analysis (motion detection); it can do nothing on its own. Make sure you set it to ‘privacy’ mode when you are home to cut down on it hammering your wifi.
I had one unit and it sort of worked; I added a second one and BOTH of them stopped working. The more units you add, the more of your upload bandwidth they suck up (and suck they do).
Sadly, tech support is basically useless. At some point they will have you run a test on speedtest.net, and if you EVER tell them you experienced an upload speed of less than 1 Mbps, then game over: that is THE problem and, apparently, the end of their sorry support script. It seems their ‘engineers’ are unfamiliar with data compression, efficient streaming, error-handling algorithms, etc. If you are .01 under 1 Mbps (my case), you are screwed; they won’t support their product (or allow you to return it, because it says 1 Mbps on the web site). It doesn’t matter if you can FaceTime or run Google Hangouts without any glitching, ‘the problem’ is your bandwidth, not their dubious implementation.
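Some back-of-the-envelope arithmetic shows why a hard 1 Mbps cutoff is such a dubious support answer. The clip bitrate below (500 kbps) is my assumption for illustration, not a Canary spec:

```python
# Sketch: how much does 0.99 vs 1.00 Mbps of upload actually matter
# for a short compressed motion clip? The 500 kbps clip bitrate is
# an assumed figure for illustration, not a Canary specification.

def upload_seconds(clip_seconds: float, clip_kbps: float, uplink_mbps: float) -> float:
    """Time to push a compressed clip through a given uplink."""
    clip_bits = clip_seconds * clip_kbps * 1000
    return clip_bits / (uplink_mbps * 1_000_000)

for uplink in (0.99, 1.00):
    t = upload_seconds(10, 500, uplink)
    print(f"{uplink:.2f} Mbps uplink -> {t:.2f} s to upload a 10 s clip")
```

Under these assumptions, being .01 under the threshold adds about a twentieth of a second per 10-second clip, which makes a hard cutoff at exactly 1 Mbps look like a script excuse rather than an engineering limit.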
Seriously, save your money and/or look for alternatives. This canary is dead in the coal mine.