Amazon Web Services launched AWS Kinesis in 2013 as a fully managed service for real-time processing of streaming data at scale. It can collect and process petabytes of data per hour from millions of sources, enabling developers to write applications that take action on that streaming data as it arrives. Early use cases included website click-stream processing, marketing and financial applications, manufacturing instrumentation, and social media. Like other AWS capabilities, it is provided in a way that is economical and easy for developers to use.
The world has evolved over the past five years. The explosion of data we all saw coming is here, and the rapid uptake of the Internet of Things and the Industrial Internet of Things is one of its key drivers. IoT and IIoT also need new control solutions, and that has been a big trend over the last several years as well. Another huge change has been the proliferation of video as a data source. Just about every mobile device can capture video now, and just about every company has video surveillance and security cameras. So do local, state, and federal governments. Other changes in our ecosystem include continuous improvement in voice recognition technologies and machine learning and artificial intelligence solutions that actually work.
And on top of all that, AWS has continued to evolve and field more and more services and improve existing capabilities. So, with all that in mind, consider what Kinesis is and can do.
Kinesis remains focused on real-time action over streaming data, and it is fully managed, meaning it does not require you to run any infrastructure. Data stream processing enables action over live data. Data firehose capabilities enable the rapid ingest of data into data stores (including transformation of the data in flight) for near real-time analytics. Kinesis also has built-in analytical capabilities. All of the above is a continued and logical improvement of Kinesis from where it began. The most interesting capability, to me, is the new Kinesis Video Streams capability.
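To make the data stream side concrete, here is a minimal sketch of a producer using boto3, the AWS SDK for Python. The stream name, event shape, and `device_id` partition key are assumptions for illustration; the 500-record cap matches the PutRecords API limit.

```python
import json


def batch_records(events, max_batch=500):
    """Shape events into Kinesis records and split them into batches.

    PutRecords accepts at most 500 records per call, so a large event
    list must be chunked before sending. The device_id partition key
    is an assumed field used here to spread records across shards.
    """
    records = [
        {"Data": json.dumps(e).encode("utf-8"),
         "PartitionKey": str(e["device_id"])}
        for e in events
    ]
    return [records[i:i + max_batch] for i in range(0, len(records), max_batch)]


def send_to_stream(stream_name, events):
    """Push the batches to a Kinesis data stream (requires AWS credentials)."""
    import boto3  # imported here so the pure helper above stays dependency-free
    kinesis = boto3.client("kinesis")
    for batch in batch_records(events):
        kinesis.put_records(StreamName=stream_name, Records=batch)
```

A consumer on the other end of the stream (a Lambda function or a Kinesis Client Library application, for example) would then take whatever action the live data calls for.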
A great overview of Kinesis Video Streams was provided at the December 2017 re:Invent:
Kinesis Video Streams enables the secure capture, processing, and storage of video. And it does so in a way that enables fast, smart machine learning over the video streams. It can do this over any time-encoded data: video, lidar, radar, satellite imagery.
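Setting up a video stream for that kind of pipeline might look like the following boto3 sketch. The stream name, the H.264 media type, and the retention period are all assumptions for illustration, not prescriptions.

```python
def video_stream_config(name, retention_days=1):
    """Build the arguments for kinesisvideo.create_stream.

    Retention is specified in hours; a value of 0 would mean the
    stream is not persisted at all. video/h264 is an assumed codec
    for typical camera feeds.
    """
    return {
        "StreamName": name,
        "MediaType": "video/h264",
        "DataRetentionInHours": retention_days * 24,
    }


def create_video_stream(name, retention_days=1):
    """Create the stream and return the endpoint producers should write to
    (requires AWS credentials)."""
    import boto3
    kvs = boto3.client("kinesisvideo")
    kvs.create_stream(**video_stream_config(name, retention_days))
    # PutMedia calls go to a per-stream data endpoint, not the control plane.
    return kvs.get_data_endpoint(StreamName=name, APIName="PUT_MEDIA")["DataEndpoint"]
```

A camera or gateway would then use the returned endpoint with the Kinesis Video Streams producer SDK to push fragments into the stream.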
Applications can now be easily built that analyze streaming video in real time and take action based on what they observe. The first use cases that come to mind are probably security related. Using video from security cameras as input to Kinesis can enable capabilities like license plate recognition, and then deliver applications that can find bad guys more quickly.
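On the consuming side, an analysis application reads fragments back out with the GetMedia API. A hedged sketch, with the stream name assumed and AWS credentials required for the networked part:

```python
def start_selector(at=None):
    """Build the StartSelector for GetMedia: read live from 'NOW',
    or from a specific producer timestamp if one is given."""
    if at is None:
        return {"StartSelectorType": "NOW"}
    return {"StartSelectorType": "PRODUCER_TIMESTAMP", "StartTimestamp": at}


def read_stream(stream_name):
    """Open a GetMedia connection and return the MKV payload stream
    (requires AWS credentials)."""
    import boto3
    kvs = boto3.client("kinesisvideo")
    # GetMedia, like PutMedia, is served from a per-stream data endpoint.
    endpoint = kvs.get_data_endpoint(
        StreamName=stream_name, APIName="GET_MEDIA")["DataEndpoint"]
    media = boto3.client("kinesis-video-media", endpoint_url=endpoint)
    resp = media.get_media(StreamName=stream_name,
                           StartSelector=start_selector())
    return resp["Payload"]  # streaming body of MKV fragments
```

The returned payload is a stream of MKV fragments that can be decoded frame by frame and handed to a computer vision model.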
But there are many other use cases. Consider the massive quantities of data now being generated by commercial drones in support of agricultural business, the oil and gas industry, or forestry management. I have seen other use cases in the medical and advertising industries. These can all feed video and imagery into Kinesis for machine learning to generate actionable results quickly. And consider rapid analysis of video and streaming imagery for disaster recovery. Imagery and streaming video of disaster areas can be quickly fed into Kinesis for machine learning to rapidly assess where first responders should focus, and then also create information to support insurance claims and recovery. Think of the video and data around smart cities. Kinesis can help smart cities function smoothly.
Really, in a video-enabled world, the use cases are unlimited.
Now consider the voice recognition and AI trends we have all seen underway. I could not find a precise figure on this, but my guess is that there are 40 million Amazon Echo smart speakers/assistants connected to the Amazon cloud right now. Amazon has mastered voice recognition and machine learning over voice input at scale. And with Kinesis, it has mastered machine learning over streaming video at scale. Imagine what you could do if those worked together. Both are AWS-based solutions, and both have well-documented development kits and APIs, so the future here is really whatever you can imagine. Imagine turning to the Alexa-enabled device at work and asking for video from the loading dock that showed suspicious activity. Or asking by voice which houses after the storm will probably need their roofs replaced.
Also consider Amazon Go. You may have seen the Amazon Go video: it shows a store where no one needs to stand in line to pay for what they want. Just shop and leave, and get an accurate charge for what you bought. Amazon Go works by fusing information from many sensors, with computer vision, machine learning, and AI at its core. This is all based on Kinesis. And all of it is available for any developer to build apps with. Any store anywhere can be like this. So can any factory or warehouse anywhere.
For more on Amazon Go see:
What next? My recommendation is to start imagining solutions that integrate video, machine learning, voice recognition, robotics, legacy data, and streaming data into capabilities that can drive decisions and automated actions. After you let your imagination run for a while, start asking your developers what they think about your ideas and see if Kinesis is a fit for making them a reality.
Be sure you are signed up to our special reports to track more on this and related topics. Find them at our Newsletter Subscription Page.