Until February 11:
✓ Transformation Day for free
✓ Team discounts
✓ Save up to £330
Description
Deep learning achieves the best performance on many computer vision, natural language processing, and recommendation tasks, and it is therefore becoming increasingly popular. However, using deep learning in production is difficult: serving deep learning models requires substantial effort to build the proper infrastructure.

Serverless computing platforms, such as AWS Lambda, offer an attractive alternative: they take care of scaling up and down and charge only for actual usage. Unfortunately, these platforms have limitations that make serving models problematic. In this talk, we show how to work around these limitations and use AWS Lambda and TensorFlow to serve deep learning models. We also discuss important maintenance aspects such as cost optimization, monitoring, deployment, and release management. Finally, we cover the limitations of AWS Lambda, compare it with “serverful” solutions, and identify workloads for which serverless is not the best option.
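One common technique for serving models on AWS Lambda, in the spirit of what the abstract describes, is to initialize the model once per container (outside the handler function) so that only cold starts pay the loading cost. The sketch below illustrates that pattern with a stub in place of the actual TensorFlow model load; the `_load_model` body and the event shape are assumptions for illustration, not the speaker's actual implementation.

```python
import json

_MODEL = None  # module-level cache; survives across warm invocations of the same container


def _load_model():
    # Placeholder for the real load step, e.g. downloading a SavedModel
    # from S3 and calling tf.saved_model.load(...). Stubbed here so the
    # sketch stays self-contained: the "model" just sums the features.
    return lambda features: [float(sum(features))]


def get_model():
    # Lazy initialization: runs the expensive load only on a cold start.
    global _MODEL
    if _MODEL is None:
        _MODEL = _load_model()
    return _MODEL


def handler(event, context):
    # Assumes an API Gateway-style event with a JSON body like
    # {"features": [1.0, 2.0, 3.0]}.
    features = json.loads(event["body"])["features"]
    prediction = get_model()(features)
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": prediction}),
    }
```

Keeping the model in a module-level variable is what makes warm invocations fast; it is also why cold-start latency and package size are central concerns when deciding whether a workload fits serverless at all.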