Monday, 16 June 2014

Microservices and the Internet of Things - First impressions


I must say I was sceptical when I first heard the term “microservices”. It sounded like yet another wash-rinse-repeat cycle of earlier incarnations of SOA. It appears I was wrong – this architectural pattern has some interesting characteristics that, in my opinion, offer real potential for event-driven, edge-processing systems (the kind that are prevalent in the Internet of Things).

After watching Fred George’s video, I realised what he described was an event-driven, agent-based systems model, rather than how many of us see SOA implementations today (often way off the original notion of SOA). At a conceptual level, the pattern describes a ‘Complex Adaptive’ system. The essential principles of the architecture, however, appear teasingly elegant and simple. Few of these design principles are unique to microservices, but in combination they make a compelling story:

Publish anything of interest – don’t wait to be asked; if your microservice thinks it has some information that might be of use to the microservices ecosystem, then publish-and-be-damned.

Amplify success & attenuate failure – microservices that publish useful information thrive, while those left unsubscribed wither on the vine. Information subscribers determine value, and value adjusts over time and changing circumstances.

Adaptive ecosystem – multiple versions of a microservice are encouraged – a may-the-best-service-win mentality introduces variety, which leads to evolution.

Asynchronous & encapsulated – everything is as asynchronous as possible – microservices manage their own data independently and then share it in event messages over an asynchronous publish-subscribe bus.

Think events not entities – no grand BDUF data model, just a cloud of ever-changing event messages – more like Twitter than a DBMS. Events have a “use-by-date” that indicates the freshness of data.

Events are immutable – time-series snapshots, no updates allowed.

Designed for failure – microservices must expect problems, tell the world when they encounter one, and send out “I’m alive” heartbeats.

Self-organizing & self-monitoring – a self-organizing system-of-systems that needs no orchestration. Health monitoring and other administration features are established through a class of microservices.

Disposable Code – microservices are very, very small (typically under 1000 lines of code). They can be developed in any language.

Ultra-rapid deployment – new microservices can be written and deployed within hours with a zero-test SDLC. (A minimal sketch illustrating a few of these principles follows below.)
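
To make a few of these principles concrete, here is a minimal, purely illustrative sketch: a toy in-process publish-subscribe bus, an immutable event carrying a use-by timestamp, and a service that publishes readings plus an “I’m alive” heartbeat. All of the names here (Bus, Event, TemperatureService, the topics and the 60-second freshness window) are my own assumptions for illustration – a real deployment would use an asynchronous broker such as RabbitMQ or an MQTT server rather than threads in one process.

```python
import threading
import time
import uuid
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass(frozen=True)  # events are immutable, time-stamped snapshots
class Event:
    topic: str
    payload: dict
    published_at: float = field(default_factory=time.time)
    use_by: float = 0.0  # the "use-by date" - consumers can ignore stale events
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))


class Bus:
    """Toy in-process publish-subscribe bus, standing in for a real async broker."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, event):
        # fire-and-forget: every subscriber is notified on its own thread
        for handler in self._subscribers[event.topic]:
            threading.Thread(target=handler, args=(event,), daemon=True).start()


class TemperatureService:
    """Publishes anything of interest, plus a periodic 'I'm alive' heartbeat."""

    def __init__(self, bus):
        self.bus = bus

    def run_once(self, reading_c):
        now = time.time()
        self.bus.publish(Event("sensor.temperature", {"celsius": reading_c},
                               use_by=now + 60))  # fresh for 60 seconds
        self.bus.publish(Event("service.heartbeat", {"service": "temperature"}))


if __name__ == "__main__":
    bus = Bus()
    bus.subscribe("sensor.temperature",
                  lambda e: print("consumed", e.payload,
                                  "stale" if time.time() > e.use_by else "fresh"))
    TemperatureService(bus).run_once(21.5)
    time.sleep(0.1)  # give the handler thread a moment to run
```

In a full-blown version of this, each service would also publish versioned variants of itself and quietly wither away when nothing subscribes to its output, per the principles above.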

It struck me that many of these design principles could apply, in part, to a 2020 Smart Grid architecture I’m working on, and to the much broader ‘Internet of Things’ ecosystem.

The microservices pattern does seem to lend itself to the notion of highly autonomous, location-independent s/w agents that could reside at the centre, mid-point or edge of an environment. I can imagine that the fundamental simplicity of the model would help, rather than hinder, data privacy and protection, by making it possible to apply high-level system contexts, policies and protocols (e.g. encryption and redaction) to the event-streams (a minimal sketch of the redaction idea follows the list below). This pattern, of course, won’t be the ‘right-fit’ for all situations, but it does seem to offer interesting opportunities in:

  • Agility - very small disposable services are deployable within hours
  • Resilience - withstands service failures and supports service evolution
  • Robustness – it’s hard to break due to: simplicity, in-built failure handling and lack of centralized orchestration
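
On the policy point above, here is a rough, hypothetical sketch of how a redaction policy might be applied to events at the edge before they are published onwards. The topic name, field names and policy table are all invented for illustration; a real system would load policies from configuration and pair redaction with encryption of what remains.

```python
# Hypothetical redaction policy, applied to events before they leave the edge.
REDACTION_POLICY = {
    "meter.reading": {"household_id", "gps"},  # identifying fields to strip
}


def apply_policy(topic: str, payload: dict) -> dict:
    """Return a copy of the payload with any policy-listed fields removed."""
    blocked = REDACTION_POLICY.get(topic, set())
    return {key: value for key, value in payload.items() if key not in blocked}


if __name__ == "__main__":
    raw = {"kwh": 1.2, "household_id": "H-42", "gps": (22.3, 114.2)}
    print(apply_policy("meter.reading", raw))  # -> {'kwh': 1.2}
```
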
It may be that the microservices pattern can only be applied to operational decision-support and behaviour-profiling situations. But if that’s the case, I still see great potential in a world where many trillions of sensor-generated events will be published, consumed, filtered, aggregated and correlated. I’m no longer a developer, but as an architect I’m always on the look-out for patterns that could either apply to future vendors’ products and services or act as a guide for in-house software development practice.
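
As a rough sketch of the filter-and-aggregate step implied above (the event shape and the 60-second window are assumptions of mine, nothing more):

```python
from collections import defaultdict
from statistics import mean


def aggregate(events, window_s=60):
    """Average (timestamp, sensor_id, value) readings per sensor per time window."""
    buckets = defaultdict(list)
    for ts, sensor_id, value in events:
        buckets[(sensor_id, int(ts // window_s))].append(value)
    return {key: mean(values) for key, values in buckets.items()}


if __name__ == "__main__":
    readings = [(0, "s1", 10), (30, "s1", 20), (70, "s1", 40)]
    print(aggregate(readings))  # -> {('s1', 0): 15, ('s1', 1): 40}
```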

As always, I’d be keen to hear your views, examples and opinions about microservices and their potential application to the IoT. Have you come across examples of microservices pattern in an IoT context - deployed or in the labs?

I whole-heartedly recommend setting aside an hour to watch the video of Fred George’s presentation on microservices:


Fred George – recording from the Øredev Conference (2013-11-08) on Vimeo.


Post-post:
  • Another great post about microservices - including downsides.
  • More here including "The 8 fallacies of distributed computing".


Duke Energy are doing some interesting things in the Edge Processing space.

Here's a video on microservices in the context of IoT (worth ignoring the references to Cloud/Azure):

http://www.microsoftvirtualacademy.com/training-courses/exploring-microservices-in-docker-and-microsoft-azure

I'd like to talk to anyone who's implementing, or thinking about, a Staged Event-Driven Architecture (SEDA) using microservices for Edge Processing.
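
For anyone unfamiliar with the term, here is a very small, purely illustrative sketch of the SEDA idea: independent stages fed by bounded queues, so each stage can be throttled, scaled or shed on its own. The stage names and logic are hypothetical.

```python
import queue
import threading
import time


def stage(name, inbox, outbox, work):
    """Run a SEDA-style stage: a worker loop fed by a bounded queue."""
    def loop():
        while True:
            item = inbox.get()
            result = work(item)
            print(f"{name}: {item} -> {result}")
            if outbox is not None and result is not None:
                outbox.put(result)
    threading.Thread(target=loop, daemon=True).start()


if __name__ == "__main__":
    q_filter = queue.Queue(maxsize=100)     # bounded queues give back-pressure
    q_aggregate = queue.Queue(maxsize=100)
    stage("filter", q_filter, q_aggregate, lambda e: e if e["value"] > 0 else None)
    stage("aggregate", q_aggregate, None, lambda e: {"avg": e["value"]})
    q_filter.put({"value": 3})
    time.sleep(0.2)  # let the daemon threads drain the queues
```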

Phil Wills on the experience of deploying microservices at The Guardian.

Monday, 2 June 2014

Architect or Coach?

Is it just me, or are others finding the Enterprise Architect role shifting towards ‘Coach/Facilitator’? 
These days I find I'm most attracted to Tweets about workshop facilitation and business analysis techniques rather than anything discussing Enterprise Architecture: frameworks, methods and tools. 

Here are a few links I’d recommend for those interested in the former.


Thursday, 13 March 2014

The Internet of Things & me - still fun after all these years!

I attended Hong Kong's first 'Internet of Things' forum a few weeks back. I have to say, I was underwhelmed by the event! It was like time travel back to 2002:

Late in 2002, Kurt Kammerer and Tim Schideler founded a company called VI Agents. A few months later I joined them, and together we came up with a conceptual design for the world's first 'Sensor-as-a-Service' platform, although we didn't call it that back then. We were asked to come up with a design for British Telecom's nascent 'Auto ID' service, driven by a belief that RFID tags would become the ubiquitous method of tracking all sorts of things: from sea containers to trains to cattle.

Our idea was to take a business-scenario view of the opportunity rather than a technology one. We believed that the solution needed to be signalling-technology agnostic (i.e. not tied to RFID nor the Electronic Product Code architecture) and, as a multi-tenant web-based platform, it needed to be easy for customers to integrate with. And being a very small start-up, we wanted to leverage open technologies and the lowest-possible-cost software engineering expertise. Kurt and I were the main ideas guys: Kurt drawing on his background in software agent technologies, and me contributing a tracking-objects-anywhere design pattern I'd been working on since my time at DHL and had embellished while working for Hutchison Ports.

Kurt and I first met as a result of the post-9/11 activities started by US Customs in an attempt to prevent weapons of mass destruction arriving at the shores of the US. At that time Kurt was still the MD of Living Systems AG in Germany, who specialized in software agent-based platforms for electronic trading, and I was a member of a project team at Hutchison Port Holdings Ltd. who were implementing RFID solutions for securing sea containers. My bosses in Hong Kong wanted me to come up with a suggestion for monetizing the investment in RFID technologies, so I dusted off a simple parcel-tracking design pattern from my days at DHL and used it as my initial mental canvas for something that became a concept called 'Super Track' (only because I lacked the imagination to call it anything else!).

British Telecom's R&D team had got wind of our idea and invited us to write a proposal that outlined the concept - a few weeks later we started to develop the early prototypes of what would become the Vi SixD SaaS platform - the world's first Cloud-based, user configurable, 'track-anything' service.

Jumping forward several years: like most start-ups, VI SixD didn't make us a fortune - mostly because RFID didn't really take off as predicted. SixD did, however, earn its spurs with a few niche clients, most notably a specialist logistics firm providing services to the US military; SixD was used to track supplies from the USA going to war zones in the Middle East for several years. I'd do it all over again, given the chance! It turns out the design patterns we developed in its gestation have been extremely useful in a wide variety of contexts - so much so that Carl Bate and I ended up describing how they applied to information-sharing challenges in the UK's Criminal Justice System and helped transform a retail bank, among many others. It also introduced me to the concept of Event Processing (both complex and simple), and I was able to reuse event design patterns at Royal Mail and Yodel.

Early in 2013, I found myself back in Hong Kong, working within the Energy industry with a focus on Smart Grid. Energy companies worldwide are scrambling to execute technology pilots of a new breed of machine-to-machine devices that will make the power grid more resilient, bi-directional and smart enough to conserve energy while, at the same time, meeting growing demand and emission targets.

So, here I am, back working in the IoT/m2m space. This time, however, I'm focused on business transformation and staging the deployment of a complex Smart Grid architecture over the next 10+ years. Hopefully, I'll be retired by the time it's fully realised, but I'm happy to be playing a part in making IoT a reality.

Anyone want to chat about the subject - please feel free to comment! Thx.



Thursday, 6 March 2014

Whole-Brained Business Analysis - New Metaphor Required


I've been guilty of using the much-debated 'Left vs Right brain' metaphor to explain what I believe is needed. By way of example, Alec Sharp (@alecsharp), Sally Bean (@Cybersal), Roy Grubb (@roygrubb) and I have been Tweeting about Concept Modeling vs Concept Mapping. Alec is keen to get Data Modelers to abstract their thinking up from physical Data Models by thinking conceptually, and I have been encouraging Business Analysts to think similarly when gathering requirements. This has meant that we both find we need to introduce a different mindset: one that encourages more creative and inclusive discussion at the initial discovery and play-back stage of the Requirements-Solution Design journey. I expect the Agile/XP community will declare this to be their philosophy (and nothing new), and they're probably right. But rather than get caught up in 'IT-centric' methods, I'd rather think of it as a way to better understand any requirements for change - regardless of the Software Development Life-Cycle. I'd rather see such thinking applied to all aspects of business change - people, process, practice, policy and ... technology.

Tried-and-tested analytical techniques should not be abandoned; they just need to be augmented with others that, in my experience, help expand ideas and produce resilient, coherent and business-value-creating solutions. Both sides of the equation are equally important. However, I'm finding (through experiment) that the more creative techniques are more engaging - simply more fun and inclusive - and this alone, in my recent experience, can dramatically improve business outcomes.

In attempts to explain the need for a more 'whole-brained' approach, I've been following the lead of the 'Design Thinking' community in referring to both Theory X and Theory Y from MIT Sloan and the Left-brain Right-brain metaphor. This, however, is fraught with problems, due in large part to the findings of the University of Utah that debunk such binary thinking (as I was reminded by Rob England - @theitskeptic).

So I'm in a quandary: on the one hand, I find that an X-Y, Left-Right metaphor is a simple way to convey the difference between, say, Analysis and Synthesis; on the other hand, I run the risk of aligning with outdated concepts that are being fundamentally reconsidered by neuroscientists.

I guess the Complexity Science community might say that I'm talking about the difference between 'Complex Adaptive'  vs. 'Complicated' systems, but, again, academic debate makes coming up with a simple metaphor next to impossible.

Has anyone found an alternative metaphor for a more balanced approach to Business Analysis and Enterprise Architecture?

Importantly, I'm keen to avoid the impression that people are to be seen as fundamentally one way or another. My observation is that it is the practice of Business Analysis/Enterprise Architecture that needs to be more 'Whole-brained' - not the individuals per se.

To get the discussion rolling, I'd like to hear views on:
  • A good Business Analyst or Enterprise Architect must balance Left-X (Reliability - doing things right) and Right-Y (Validity - doing the right thing)
  • We've spent too much time on methods that attempt to industrialise EA (the TOGAF 9.0 manual runs to around 800 pages in the attempt), and BAs are too often focused on methods that target an 'IT solution' rather than the Whys and Whats of the current or desired business behaviour
  • We need to spend more time on developing pattern-based storytelling skills in BAs and EAs to deliver break-through changes and allow for innovation in TO-BE models.
  • Economic churn and environmental challenges warrant more Y-minded thinking (with appropriate X-controls)
  • The world can't be fully explained or governed algorithmically (thank god!) – not while values and trust dominate the way organisations function.


 

Wednesday, 15 January 2014

Re-purposing the Technical Debt Metaphor


Recently, I had cause to re-visit the ‘Technical Debt’ metaphor coined by Ward Cunningham when referring to agile software development:


I am finding, however, that it applies to a much broader set of circumstances, such as: the unmanaged introduction of consumer-grade I.T., Line-of-Business ‘Credit-Card-Cloud’ consumption and ‘technology-solution-without-a-requirement-and/or-architecture’ implementations. So I’ve had a go at rewriting Ward’s original words:

“Technical debt is a metaphor that compares an incremental, get-something-started approach with the easy acquisition of money through fast loans.

A monetary loan, of course, has to be paid back with interest. In terms of software development and technology selection and deployment, payback requires the technicians to rework the solution as they learn more about how it interacts with other technologies and which features are being used, which are not, and which are now needed. Just as monetary debt can easily spiral out of control if not managed properly, so can technical debt.

In business, the metaphor is often used to illustrate the concept that an organization will end up spending more in the future by not having a sufficient understanding of the complete requirements before selecting a solution. The assumption is that if an organization chooses to ignore a course of action it knows should be taken, the organization will risk paying for it in terms of time, money or damage to the organization's reputation in the future. As time goes by, efforts to go back and address the missing requirements may become more complicated and messy. Eventually the problem may reach a tipping point and the organization must then decide whether or not to honour its original debt and continue investing time and effort to fix the problem. This decision can be made more difficult by something called ‘the sunk cost effect’, which is the emotional tendency of humans to want to continue investing in something that clearly isn't working (e.g. it can't scale or is missing features)”.
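
As a back-of-the-envelope illustration of the ‘interest’ in the metaphor – the cost of deferred rework compounding release after release – here is a tiny sketch; the figures are entirely invented.

```python
def deferred_cost(principal_days, interest_rate, releases):
    """Estimated rework effort if the debt is left unpaid over several releases."""
    return principal_days * (1 + interest_rate) ** releases


# e.g. 10 days of rework, compounding at 15% per release, deferred for 6 releases
print(round(deferred_cost(10, 0.15, 6), 1))  # -> 23.1 days
```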

Anything you’d add/change/delete?