GSA Silicon Summit: Focus on Edge Computing, AI/ML and Vehicle to Everything (V2X) Communications

Introduction:

Many “big picture” technology trends and future requirements were detailed at GSA’s Silicon Summit, held June 18, 2019 in Santa Clara, CA.  The conference was a “high level” executive briefing for the entire semiconductor ecosystem, including software, middleware and hardware.  Insights on trends, key issues, opportunities and technology challenges (especially related to IoT security) were described and debated in panel sessions.  Partnerships and collaboration were deemed necessary, especially for start-ups and small companies, to advance the technology, products and services to be offered in this new age of AI, ML/DL, cloud, IoT, autonomous vehicles, (fake) 5G, etc.  Companies involved in the development of next-generation Mobility and Edge Intelligence system architectures and solutions discussed the opportunities, advancements and challenges in those key areas.

With the rapid proliferation of smart edge computing devices and applications, the volume of data produced is growing exponentially. Connected and “intelligent” devices are predicted to grow to 200 billion by 2020, generating enormous amounts of data every single day. The business potential created by this data comes with huge expectations.  Edge devices, edge intelligence, high bandwidth connectivity, high performance computing, machine learning and other technologies are essential to enabling opportunities in markets such as Mobility and Industrial IoT.

This article will focus on Edge Computing, AI moving closer to the endpoint device (at the network edge or embedded in the endpoint device/thing itself), and vehicle-to-vehicle/everything communications.

While there were many presentations and panels on security, that is beyond the scope of the IEEE ComSoc Techblog.  However, we share Intel’s opinion, expressed during a lunch panel session, that standards for Over The Air (OTA) security software/firmware updates are necessary for almost all smart/intelligent devices that are part of the IoT.

Architectural Implications of Edge Computing, Yogesh Bhatt, VP of Products – ML, DL and Cognitive Tech, Ericsson Silicon Valley:

Several emerging application (data flow) patterns are moving intelligence from the cloud, to the local/metro area, to on-premises, and ultimately to the endpoint devices themselves.  These applications include: cloud-native apps like content delivery; AI-enabled apps that sense, think and act; and immersive apps like media processing/augmentation/distribution.

AI-enabled industrial apps are increasing.  They were defined as: the ability to collect and deliver the right data/video/images, at the right velocity and in the right quantities, to a wide set of well-orchestrated ML models and provide insights at all levels of the operation.  Connectivity and compute are being packaged together and offered “as a service.”  One example given was 4K video over (pre-standard) “5G” wireless access at the 2018 U.S. Open.  That was intended as a case study of whether 5G could replace miles of fiber to broadcast live, high definition sports events.

Yogesh Bhatt VP of Products- ML, DL and Cognitive Tech Ericsson – Silicon Valley

Image courtesy of GSA Global

……………………………………………………………………………………………………………………………………………………………………………………………..

Required Architecture for Emerging App Patterns: Application Cloud; Management & Monetization; Network Slices; Mobile/Fixed Cloud Infrastructure; Distributed Cloud; and Transport.   The flow of emerging apps requires computing capability to be distributed based on the application pattern and flow.  That in turn mandates cross-domain orchestration and automation of services.

Key take-aways:

  • Emerging Application patterns will require significant compute capabilities close to the data sources and sinks (end points)
  • The current Device-to-Cloud architecture needs to expand to encompass hosting points that provide such processing capabilities
  • The processing capabilities at these edge locations will look very different from those of centralized Cloud Data Centers (DCs)

………………………………………………………………………………………………………………………………………………………………………………………………..

Heterogeneous Integration for the Edge, Yin Chang Sr. VP, Sales & Marketing ASE Group:

ASE sees the “Empowered Edge” as a key 2019 strategic trend.  Edge computing drivers include: latency/determinism, cost of bandwidth, better privacy and security, and higher reliability/availability (connections go down, limited autonomy).

  • At the edge (where that is was left undefined – see my comment below) we might see the following: collect/process data, imaging devices, image processing, biometric sensors, microphones, sensors with embedded MCUs, environmental sensors.
  • At the core (assumed to be somewhere in the cloud/Internet): Compute/Intelligent processing, AI & Machine Learning, Networks/Server Processors, High Bandwidth Memory (HBM), Neuro-engine (future), Quantum computing (future).

Compute capabilities are moving to the edge and endpoints:

  • Edge Infrastructure and IoT/Endpoint Systems are growing in compute power per system.
  • As the number of IoT/Endpoint systems outgrows other categories, the bulk of TOTAL compute will reside at the endpoint.

Challenges at the Edge will require a cost effective integration solution which will need to deal with:

  • Cloud connectivity – latency and bandwidth limitations
  • Mixed device functionality – sense, compute, connect, power
  • Multiple communication protocols
  • Form factor constraints
  • Battery life
  • Security
  • Cost
  • High density

ASE advocates Heterogeneous Integration at the Edge by material, component type, circuit type (IP), node and bonding/interconnect method.  The company has partnered with Cadence to realize System in Package (SiP) intelligent design with “advanced functional integration.”  That partnership addresses the design/verification challenges of complex layouts in advanced packages, including ultra-complex SiP, Fan-Out and 2.5D packages.

One such SiP design for wireless communications is antenna integration:

  • Antenna on/in Package for SiP module integration
  • Selective EMI Shielding for non-limited module level FCC certification
  • Selective EMI Shielding – partial metal coating process by sputter for FCC EMI certification
  • Small Size Antenna Integration – Chip antenna, Printed circuit antenna (under development)

…………………………………………………………………………………………………………………………………………………………………………………………………………………………….

Democratizing AI at the Endpoint, Brian Faith, CEO of QuickLogic:

QuickLogic was described as “a platform company that enables our customers to quickly and easily create intelligent ultra-low power endpoints to build a smarter, more connected world.”  The company was founded in 1989, went public in 1999, and now has a worldwide presence.  Brian said they are focused on AI for growth markets including:

▪ Hearable/Wearable
▪ Consumer & Industrial IoT
▪ Smartphone/Tablet
▪ Consumer Electronics

AI and edge computing are coming together such that data analytics is moving from the cloud to the edge to the IoT endpoint (eventually).  However, there are trade-offs for where computing should be located, which depend on the application type.  Some considerations include:

▪ Application latency & power consumption (battery life) requirements
▪ Data security can be a factor
▪ Local insights are trivial and non-actionable
▪ Smart Sensors => rich data => actionable if real-time
▪ Network sends insightful data (less bandwidth needed)
▪ Cloud focuses on aggregate data insights and actions
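The placement trade-offs listed above can be sketched as a simple decision rule. This is an illustrative sketch, not from any talk at the Summit; the function name and the latency/bandwidth thresholds are invented for clarity.

```python
# Hypothetical sketch of edge-vs-cloud placement; thresholds are assumptions,
# not figures from the conference.

def place_compute(latency_budget_ms: float,
                  data_rate_mbps: float,
                  privacy_sensitive: bool) -> str:
    """Suggest a compute location for an IoT analytics workload."""
    if latency_budget_ms < 10 or privacy_sensitive:
        return "endpoint"   # real-time or private: keep data on-device
    if latency_budget_ms < 100 or data_rate_mbps > 50:
        return "edge"       # too much raw data to backhaul to the cloud
    return "cloud"          # aggregate, non-real-time insights

print(place_compute(5, 2, False))     # endpoint
print(place_compute(50, 100, False))  # edge
print(place_compute(500, 1, False))   # cloud
```

In practice the decision also weighs cost, reliability and regulatory constraints, but the latency/bandwidth/privacy triad above captures the considerations the speakers kept returning to.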

AI Adoption Challenges:

1.   Resource-Constrained Hardware:

▪ Can’t just run TensorFlow
▪ Limited SRAM, MIPS, FPU / GPU
▪ Mobile or wireless battery/power requirements

2.  Resource-Constrained Development Teams:

▪ Embedded coding more complex & fragmented than cloud PaaS
▪ Scarcity of data scientists, DSP, FPGA and firmware engineers
▪ Limited bandwidth to explore new tools / methods

3.  Lack of AI Automated Tools:

• Typical process: MATLAB modeling followed by hand coded C/C++
• Available AI tools focus on algorithms, not end-to-end workflows
• Per product algorithm cost: $500k, 6-9 months; often far greater

For Machine Learning (ML) good training is vital as is the data:

• Addresses anticipated sources of variance
• Leverages application domain expertise
• Includes all potentially relevant metadata
• Seeks optimal size for the problem at hand

ML Algorithms should fit within Embedded Computing Constraints:

Endpoint Inference Models:

• Starts with model appropriate to the problem
• Fits within available computing resources with headroom
• Utilizes least expensive features that deliver desired accuracy
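The "fits within available computing resources with headroom" rule can be made concrete with a minimal sketch. All names and numbers below are invented for illustration; this is not SensiML's or QuickLogic's actual tooling.

```python
# Minimal sketch (hypothetical) of checking candidate inference models
# against an MCU's SRAM budget while reserving headroom for the rest of
# the firmware.

def fits_with_headroom(model_sram_kb: float,
                       target_sram_kb: float,
                       headroom: float = 0.25) -> bool:
    """True if the model fits while leaving `headroom` fraction of SRAM free."""
    return model_sram_kb <= target_sram_kb * (1.0 - headroom)

# Pick viable models for a hypothetical 512 KB MCU:
candidates = [("cnn_large", 600), ("cnn_small", 300), ("decision_tree", 48)]
viable = [name for name, kb in candidates if fits_with_headroom(kb, 512)]
print(viable)  # ['cnn_small', 'decision_tree']
```

The "least expensive features that deliver desired accuracy" bullet works the same way: among the models that pass this check, an automated toolkit would pick the one with the lowest compute cost at acceptable accuracy.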

SensiML Toolkit:

• Provides numerous different ML and AI algorithms and automates the selection process
• Leverages target hardware capabilities and builds models within its memory and computing limits
• Traverses a library of over 80 features to select the best features for the problem

A Predictive Maintenance for a Motor Use Case was cited as an example of AI/ML:

Challenges:

▪ A unique model doesn’t scale across similar motors (due to concrete, rubber, loading)
▪ Endpoint AI decreases system bandwidth, latency, power

Monitoring States:

▪ Bearing / shaft faults
▪ Pump cavitation / flow inefficiency
▪ Rotating machinery faults
▪ Seismic / structural health monitoring
▪ Factory predictive maintenance

QuickLogic aims to democratize AI-enabled SoC design using SiFive templates and a cloud-based SoC platform, with a goal of a custom SoC in 12 weeks!  In 2020 the company plans to have: an AI Software Platform, SoC Architecture, and eFPGA IP Cores.  Very impressive indeed, if all that can be realized.

……………………………………………………………………………………………………………………………………………………………………………………………………………………

Empowering the Edge Panel Session:

Mike Noonen of Mixed-Com chaired a panel discussion on Empowering the Edge.  Two key points made were that edge computing is MORE SECURE than cloud computing (smaller attack surface) and that, as intelligence (AI/ML/data processing) moves to the edge, connections will become richer and richer.  However, no speaker, panelist or moderator defined where the edge actually is located.  Is it on premises, the first network element in the access network, the mobile packet core (for a cellular connection), an LPWAN or an ISP point of presence?  Or any of the above?

Mike Noonen of Mixed-Com leads Panel Discussion

Photo courtesy of GSA Global

……………………………………………………………………………………………………………………………………………………………………………………………………………………

After the conference, Mike emailed this to me:

“One of the many aspects of the GSA Silicon Summit that I appreciate is the topic/theme (such as edge computing). The speakers and panelists addressing the chosen theme offer a 360 degree perspective ranging from technical, commercial and even social aspects of a technology. I always learn something and gain new insights when this broad perspective is presented.”

I couldn’t agree more with Mike!

…………………………………………………………………………………………………………………………………………………………………………………………………………………………..

V2X – Vehicle to Everything Connectivity, Paul Sakamoto, COO of Savari:

V2X connectivity technology today is based on two competing standards: DSRC: Dedicated Short Range Communications (based on IEEE 802.11p WiFi) and C-V2X: Cellular Vehicle to Everything (based on LTE).  Software can run on either, but the V2X connectivity hardware is based on one of the above standards.

DSRC: Dedicated Short Range Communications: 

  • Legacy tech – 20 years of work; low latency performance; good range and reliability
  • No carrier fees; minimize fixed cost
  • Infrastructure needs; how to pay?
  • EU Delegated Act win, but 5GAA is contesting it

C-V2X: Cellular Vehicle to Everything:

  • Developed from LTE; big money backing
  • Cellular communications history; good range and reliability
  • Carrier fees required; subsidy for fixed costs
  • Mix in with base stations to amortize costs
  • China has chosen it as part of the government’s 5G plan

V2X Challenge: Navigate the Next 10 Years:

For mobile use, the main purpose is safety and awareness:
• Tight message security
• Low latency (<1ms)
• Needs client saturation
• Short range

For infrastructure, the main purpose is efficiency and planning:
• Tight message security
• Moderate latency (~100ms)
• Deployed only where needed
• Longer range
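The two service classes above differ mainly in latency budget and range. A small sketch contrasting them, with the numbers taken from the slide (safety: <1 ms; infrastructure: ~100 ms) and the dictionary/class names invented for illustration:

```python
# Illustrative only: the two V2X service classes from Savari's slide,
# keyed by hypothetical class names.

V2X_CLASSES = {
    "safety":         {"max_latency_ms": 1,   "range": "short",  "security": "tight"},
    "infrastructure": {"max_latency_ms": 100, "range": "longer", "security": "tight"},
}

def meets_latency(service: str, measured_ms: float) -> bool:
    """Check a measured end-to-end latency against the class budget."""
    return measured_ms <= V2X_CLASSES[service]["max_latency_ms"]

print(meets_latency("safety", 0.8))          # True
print(meets_latency("infrastructure", 150))  # False
```

Note that both classes demand tight message security; what distinguishes them is that a collision-avoidance message is useless after a millisecond, while a traffic-planning message tolerates a round trip to roadside infrastructure.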

In closing, Paul said V2X is going to be a long race with many twists and turns.  Savari’s strategy is to be “radio agnostic,” use scalable computing and scalable security elements, have a 7-10 year business plan with a 2-3 year product development cycle, and be ready to pounce at any inflection point (which may mean parallel developments).

………………………………………………………………………………………………………

 
