KONA Story
Technology that benefits the world

Understanding KONA PLATE

LISTEN: Podcast | YouTube | LinkedIn
LINKS 
→ Vid Jain (LinkedIn)
→ “Startup Wallaroo Labs wins Space Force contract to model performance of AI on edge devices” (SpaceNews)
KEY POINTS
Most companies wait too long to think about production. Start before you feel you need to.
Issues to look for when operationalizing AI models: 1) the production environment, 2) integration, and 3) data pipelines.
MLOps empowers data scientists to build better models, move faster, and limit business risk.
Edge data/predictions must be incorporated into a central AI program to enable cross-machine insights.
NOTES
Data + AI = Value? Not so fast: deployment issues make the equation harder than it looks.
Predictive analytics is a widely applicable use case, from space to automotive and manufacturing.
Wallaroo helps data scientists operationalize AI, taking models from the lab to a production environment.
You can’t do “set and forget” with data science models; they require constant monitoring.
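There are many ways to monitor a live model; as a concrete illustration (not anything specific to Wallaroo), here is a minimal Python sketch of one widely used drift signal, the Population Stability Index (PSI), comparing live scores against a validation-time baseline. The 0.2 threshold is a common rule of thumb, not a standard.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample (`expected`,
    e.g. scores captured at validation time) and live scores (`actual`)."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(bins + 1)]
    edges[0], edges[-1] = float("-inf"), float("inf")  # catch out-of-range live scores

    def frac(sample, left, right):
        hits = sum(1 for x in sample if left <= x < right)
        return max(hits / len(sample), 1e-6)  # avoid log(0) on empty bins

    return sum(
        (frac(actual, left, right) - frac(expected, left, right))
        * math.log(frac(actual, left, right) / frac(expected, left, right))
        for left, right in zip(edges, edges[1:])
    )

# Toy check: live scores have drifted higher and wider than the baseline.
baseline = [random.gauss(0.50, 0.10) for _ in range(5_000)]
live = [random.gauss(0.65, 0.15) for _ in range(5_000)]
if psi(baseline, live) > 0.2:  # rule-of-thumb threshold for meaningful drift
    print("drift detected: investigate inputs or retrain")
```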
Issues to look for when operationalizing AI models:
1. Production environment: what hardware will be hosting the model?
2. Integration: how will your model receive input data and deliver output insights in a production setting?
3. Data pipelines: do you have the data streams you expected? How are you monitoring the health of those pipelines? (See the health-check sketch after this list.)
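To make item 3 concrete, here is a hedged sketch of a basic pipeline health check in plain Python. `PipelineHealth`, `check_pipeline`, and the field names are hypothetical, not from the episode or from Wallaroo’s API; freshness, completeness, and volume are simply three common signals to watch.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class PipelineHealth:
    fresh: bool      # data arrived recently enough
    complete: bool   # required fields are populated
    volume_ok: bool  # batch size is within the expected band

def check_pipeline(records, required_fields, min_rows, max_lag):
    """records: the latest batch pulled from a stream, as dicts with an
    'event_time' datetime plus the feature fields the model expects."""
    now = datetime.now(timezone.utc)
    fresh = bool(records) and now - max(r["event_time"] for r in records) <= max_lag
    complete = all(r.get(f) is not None for r in records for f in required_fields)
    return PipelineHealth(fresh, complete, len(records) >= min_rows)

# Example: sensor readings expected every 5 minutes, 100+ rows per batch.
batch = [{"event_time": datetime.now(timezone.utc), "sensor_id": "a1", "value": 0.7}]
print(check_pipeline(batch, ["sensor_id", "value"], min_rows=100, max_lag=timedelta(minutes=5)))
# -> PipelineHealth(fresh=True, complete=True, volume_ok=False)
```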
How MLOps helps different personas:
1. Data scientists: spend more time on model development rather than on deployment and monitoring.
2. Business unit leaders & MLOps engineers: do more with fewer resources, faster.
Empower data scientists to:
1. Be bolder: build more interesting and sophisticated models.
2. Go faster: shrink innovation cycle times.
3. Limit business risk: minimize translation errors in going from data science to production.
Most companies wait too long to think about production. Start before you feel you need to.
Appreciate that your core IP is in your data and processes. For monitoring and deployment, look to tools like Wallaroo.
Thinking about benchmarking:
1. “Bake-off”: comparing models to one another, and swapping between models based on the value of their predictions/insights.
2. Raw performance: throughput, latency, etc., compared across various production deployments. (A toy harness covering both is sketched below.)
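As an illustration of both ideas, the sketch below runs two stand-in models over the same samples and reports a business-value score alongside raw throughput and latency. The model names and `value_fn` are placeholder assumptions, not anything from the episode; in a real bake-off you would plug in the revenue or utility of each prediction.

```python
import random
import time

def champion(x):      # stand-ins for two deployed models; any callables work
    return 0.80 * x

def challenger(x):
    return 0.82 * x

def bake_off(models, samples, value_fn):
    """Run every candidate over the same samples and report the business
    value of its predictions plus raw performance numbers."""
    stats = {}
    for name, model in models.items():
        start = time.perf_counter()
        values = [value_fn(model(x)) for x in samples]
        elapsed = time.perf_counter() - start
        stats[name] = {
            "mean_value": sum(values) / len(values),
            "throughput_per_s": len(samples) / elapsed,
            "mean_latency_ms": elapsed / len(samples) * 1_000,
        }
    return stats

samples = [random.random() for _ in range(10_000)]
print(bake_off(
    {"champion": champion, "challenger": challenger},
    samples,
    value_fn=lambda pred: pred,  # placeholder: substitute the real value of a prediction
))
# Promote whichever model wins on value, not just on speed.
```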
Edge vs. centrally managed AI: if you do inference at the edge but aren’t integrating results back in a central place to learn across deployments, you’re leaving “money on the table.”
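A minimal sketch of that idea, under the assumption that edge devices upload JSON prediction logs and a central service merges them; in practice the transport would be a message queue or object store, and the payloads would also carry features for retraining.

```python
import json
from collections import defaultdict

def edge_log(device_id, prediction, outcome):
    """On-device: serialize one (prediction, outcome) record for later upload."""
    return json.dumps({"device": device_id, "pred": prediction, "outcome": outcome})

def central_ingest(lines):
    """Centrally: merge logs from all edge deployments so models can be
    evaluated and retrained on fleet-wide data, not one machine's slice."""
    by_device = defaultdict(list)
    for line in lines:
        rec = json.loads(line)
        by_device[rec["device"]].append(rec)
    return by_device

uploads = [
    edge_log("machine-07", prediction=0.91, outcome=1),
    edge_log("machine-12", prediction=0.40, outcome=0),
]
fleet = central_ingest(uploads)
print({dev: len(recs) for dev, recs in fleet.items()})  # {'machine-07': 1, 'machine-12': 1}
```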