
How Boston Dynamics and AWS use mobility and computer vision for dynamic sensing

Massive physical instrumentation in industrial facilities must frequently be monitored for chronic trends or emerging issues.

Attempting a digital transformation to speed up this process necessitates the deployment of hundreds of thousands of discrete sensors, as well as a complex intercommunication system.

However, in the world of dynamic sensing, the same asset management task could be accomplished with a much smaller set of sensors and virtually no communications infrastructure.

AWS and Boston Dynamics are collaborating to bring this dynamic sensing capability to life by deploying sensors via mobile robots and using AWS services to transform data into critical insights for industrial teams.

Agile Mobile Robots for Automation

Spot, Boston Dynamics' quadruped robot, can be programmed to collect data in hazardous environments without the need for human intervention.

Spot can be controlled with a joystick via the Spot tablet, and an Autowalk mission – a pre-programmed route that Spot can navigate using its obstacle-avoidance and autonomy capabilities – can be recorded.

Spot can be programmed by operators using an API that allows developers to acquire, store, and retrieve sensor or camera data.

Once an Autowalk mission has been recorded, Spot can be commanded to repeat it without an operator explicitly driving the robot along the mission route.

Spot is capable of detecting hotspots in power generation facilities, gas leaks on oil rigs, and more, and is ready to be integrated with various technology platforms via the Python-based Spot software development kit (SDK).
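
As a rough sketch of what that looks like in practice, the snippet below uses the Spot SDK to authenticate with a robot and capture a frame from one of its body cameras. The hostname, credentials, and image source name are placeholders that depend on the deployment:

```python
# Minimal sketch: connect to Spot and grab a camera frame via the Spot SDK.
# ROBOT_IP, USER, and PASSWORD are placeholders for a real deployment.
import bosdyn.client
from bosdyn.client.image import ImageClient

sdk = bosdyn.client.create_standard_sdk('InspectionClient')
robot = sdk.create_robot('ROBOT_IP')
robot.authenticate('USER', 'PASSWORD')

# Request a frame from a body camera; available source names vary by robot.
image_client = robot.ensure_client(ImageClient.default_service_name)
responses = image_client.get_image_from_sources(['frontleft_fisheye_image'])

# Fisheye sources return JPEG-encoded bytes by default.
with open('inspection_frame.jpg', 'wb') as f:
    f.write(responses[0].shot.image.data)
```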

AWS and Machine Learning

Customers are using Spot for inspections powered by machine learning (ML) insights in environments that are less than ideal for human workers.

Spot can be assigned a mission that includes designated inspection points, such as areas with valves or meters that would otherwise require on-site monitoring.

When triggered by a Data Acquisition Plugin during a mission, Spot can capture local imagery with either the standard robot cameras or an optional Spot CAM, which adds panoramic and 30x optical pan-tilt-zoom capabilities.

A computer vision (CV) ML model can then process the captured imagery, for example to detect whether valves are open or closed.
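
As an illustrative sketch of that inference step, the snippet below runs a hypothetical ONNX valve classifier over a captured frame. The model file, input shape, and labels are all assumptions; in practice the model would be whatever the team has trained:

```python
# Sketch only: classify a captured frame as valve open vs. closed.
# valve_classifier.onnx, its 224x224 input, and the labels are hypothetical.
import numpy as np
import onnxruntime as ort
from PIL import Image

session = ort.InferenceSession('valve_classifier.onnx')
input_name = session.get_inputs()[0].name

# Resize and normalize the frame to the model's assumed input layout (NCHW).
img = Image.open('inspection_frame.jpg').convert('RGB').resize((224, 224))
arr = np.asarray(img, dtype=np.float32).transpose(2, 0, 1)[np.newaxis] / 255.0

logits = session.run(None, {input_name: arr})[0]
labels = ['valve_closed', 'valve_open']
print('Detection:', labels[int(np.argmax(logits))])
```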

Detections from the ML inference can then be saved onboard the robot until Spot returns to an area with a network connection.

The deployment of software and machine learning models to a Spot with intermittent network connectivity can be automated and orchestrated using AWS IoT Greengrass 2.0 and Amazon SageMaker Edge Manager.
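
For illustration, such a rollout could be triggered from Python with boto3's Greengrass v2 client, roughly as below; the thing group ARN and component name are placeholders for resources that would exist in a real account:

```python
# Sketch: deploy a hypothetical inspection component to the Greengrass
# core device on Spot's compute payload. ARN and names are placeholders.
import boto3

greengrass = boto3.client('greengrassv2')

greengrass.create_deployment(
    targetArn='arn:aws:iot:us-east-1:123456789012:thinggroup/SpotFleet',
    deploymentName='spot-inspection-rollout',
    components={
        'com.example.SpotInspection': {'componentVersion': '1.0.0'},
    },
)
```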

Streamlining Data Processing

To gain meaningful insights from raw sensor data, it is necessary to develop ML models through lengthy training processes and iterations that use large amounts of data.

Once these models are built, making them useful necessitates significant computational resources in order to identify patterns in the shortest amount of time.

This iterative process of collecting, training, testing, and retraining is required to ensure that the models in use are as accurate as possible.

There are few shortcuts a developer can take in a computer vision workflow, but there are ways to streamline it, such as:

• Automating the upload and storage of raw images during the collection phase to create a large selection pool for tagging (see the upload sketch after this list).

• Iterating through annotated images and training a model (e.g., object detection or image classification) using massively parallel compute resources.

• Having an automated pipeline for testing new models with real-world data so that inaccuracies can be identified quickly and eliminated with subsequent model re-training.
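
As a minimal sketch of the first item, the snippet below pushes newly captured frames to an S3 bucket, where they can feed a labeling workflow; the bucket name and local capture directory are placeholders:

```python
# Sketch: upload raw inspection images to S3 to build a tagging pool.
# Bucket name and local capture directory are placeholders.
import pathlib
import boto3

s3 = boto3.client('s3')
BUCKET = 'example-spot-raw-images'

for path in pathlib.Path('captures').glob('*.jpg'):
    # Key by filename; a real pipeline might prefix by mission and timestamp.
    s3.upload_file(str(path), BUCKET, f'raw/{path.name}')
```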

Numerous Integration Possibilities

Spot is unique in the world of computer vision because it is designed to navigate through our physical world.

WiFi is frequently non-existent or unreliable in the industrial environments where Spot is commonly deployed.

How can developers then train, test, and retrain their models with a robot that may only be reachable on occasion?

Furthermore, how does a remote sensing application use cloud computing without constant internet access?

Overcoming these challenges in order to operationalize a computer vision-based solution requires running inference on the edge while using asynchronous IoT messaging to reach cloud storage and compute resources.

The ideal outcome is a pipeline that asynchronously collects raw images, uses cloud compute to build models from tagged imagery, and automatically delivers each iterated model back to the edge for testing.
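
On the robot side, one simple way to tolerate intermittent connectivity is a store-and-forward buffer: detections are queued locally and flushed whenever a link is available. The sketch below illustrates the idea with a local SQLite queue; the actual publish step (MQTT, S3, etc.) is left abstract:

```python
# Sketch: store-and-forward buffer for detections made while offline.
# The publish callback is abstract; it could be an MQTT publish or S3 put.
import json
import sqlite3
import time

db = sqlite3.connect('detections.db')
db.execute('CREATE TABLE IF NOT EXISTS queue (ts REAL, payload TEXT)')

def record_detection(detection: dict) -> None:
    """Queue a detection locally, regardless of connectivity."""
    db.execute('INSERT INTO queue VALUES (?, ?)',
               (time.time(), json.dumps(detection)))
    db.commit()

def flush(publish) -> None:
    """Drain the queue through `publish` once the network is back."""
    for rowid, payload in db.execute('SELECT rowid, payload FROM queue').fetchall():
        publish(json.loads(payload))
        db.execute('DELETE FROM queue WHERE rowid = ?', (rowid,))
    db.commit()
```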

AWS IoT Greengrass 2.0 is an open-source edge runtime that works with Spot compute payloads and allows for the delivery and execution of applications or ML models on the robot.

Spot can also send data back to the cloud using AWS IoT Greengrass, which allows for a variety of use-cases.

Developers can use AWS IoT Greengrass to create custom components that use the Spot SDK to capture imagery or sensory data, perform ML inference, and report detections back to the cloud.
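
Inside such a component, reporting a detection back to AWS IoT Core might look like the sketch below, which uses the Greengrass v2 IPC client from the awsiotsdk package (it only works when run as a component on a Greengrass core device); the topic name and payload fields are placeholders:

```python
# Sketch: a Greengrass v2 component publishing a detection to AWS IoT Core.
# Topic name and payload fields are placeholders; this only runs inside a
# component on a Greengrass core device.
import json
from awsiot.greengrasscoreipc.clientv2 import GreengrassCoreIPCClientV2
from awsiot.greengrasscoreipc.model import QOS

ipc = GreengrassCoreIPCClientV2()

detection = {'asset': 'valve-07', 'state': 'open', 'confidence': 0.93}
ipc.publish_to_iot_core(
    topic_name='spot/inspections/detections',
    qos=QOS.AT_LEAST_ONCE,
    payload=json.dumps(detection).encode(),
)
```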

Integrating Amazon SageMaker Edge Manager into the solution allows developers to optimize and package models for Spot compute payloads, as well as run inference without having to install their preferred ML framework and libraries (e.g., PyTorch, TensorFlow, MXNet), giving them the flexibility to train whichever model meets their needs.
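
On the cloud side, packaging a compiled model for delivery to the edge device can be started with boto3's SageMaker client, roughly as sketched below. All names, the role ARN, and the S3 output path are placeholders, and the model is assumed to have already been compiled with SageMaker Neo:

```python
# Sketch: package a Neo-compiled model for SageMaker Edge Manager.
# Job names, role ARN, and S3 path are placeholders.
import boto3

sm = boto3.client('sagemaker')

sm.create_edge_packaging_job(
    EdgePackagingJobName='valve-classifier-pkg-v1',
    CompilationJobName='valve-classifier-neo-v1',  # existing Neo job
    ModelName='valve-classifier',
    ModelVersion='1.0',
    RoleArn='arn:aws:iam::123456789012:role/EdgePackagingRole',
    OutputConfig={'S3OutputLocation': 's3://example-spot-models/packaged/'},
)
```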

AWS IoT Greengrass 2.0 and Amazon SageMaker Edge Manager offer a unified delivery system that enables all of the links in this development chain to be realized on an edge computing device.

Meanwhile, Spot’s agile platform allows for repeatable, autonomous data collection, which can then be acted on through the robot’s API. Spot and AWS form an end-to-end solution for making AI literally strut its stuff.
