Machine Learning Research Engineer (Optimization)
Location: Boston, Massachusetts
Salary: Negotiable
Contact: Molly Boca
Contact email: mboca@understandingrecruitment.com
Job ref: ----_1585257708
Published: 10 months ago
Expiry date: 2020-04-25
Machine Learning Research Engineer (8-bit Quantization, Optimization)
We are seeking a Machine Learning Research Engineer (8-bit Quantization, Optimization) to join an innovative MIT spin-off dedicated to building technology that matters. As a Machine Learning Research Engineer, you will help us continue our record of breakthroughs in modern computing through novel applications of Deep Learning.
Located in the Greater Boston area, this post-Series A startup is keen to bring in talented researchers and scientists to help build high-performance computer hardware designed to improve on the speed and accuracy of traditional computing architectures.
Our ideal ML Research Engineer is a well-rounded individual with a background in both Machine Learning research and Software Engineering, preferably with hands-on experience in quantization, pruning, model compression, or model optimization.
We can offer our Machine Learning Research Engineer:
- A chance to develop the next generation of computing architecture
- A culture rooted in diversity that offers our employees the opportunity to work with some of the best scientists, engineers, and coders from around the globe
- An extremely energetic work environment with a mature start-up feel
- Expansive benefits coupled with the promise of exciting technical innovation
Key Skills: ML conferences, NeurIPS, NIPS, ICML, ICLR, CVPR, SysML, CPU, GPU, NN, Machine Learning, algorithms, models, training, optimization, computer science, C/C++, Python, software engineering, data scientist, neural network, 8-bit quantization, quantized models, quantization, AWS, Azure, PySpark, TensorFlow, PyTorch, floating point, inference, low precision
