Data Scientist Intern
Avalanche Computing's deep learning data scientists deliver high-quality models and algorithms by combining state-of-the-art computer vision, image processing, time-series analysis, and deep learning. As a data scientist / deep learning intern on our team, you will help develop and extend these algorithms, working with customers who build vertical AI applications in medical, manufacturing, construction, and other industries. Alongside our full-time data scientists, you will build AI models on our SaaS product that span multiple cloud platforms and edge devices.
In five years, whenever you see an "AI for all" application advertised, you will be seeing our customers; that could be your contribution, too.
You have demonstrated the fundamentals of building algorithms driven by machine learning, computer vision, deep learning, and time-series data analysis.
You have experience with Python and relevant AI frameworks (e.g., NumPy, TensorFlow, PyTorch, OpenCV).
Coding skills in C and C++, as well as experience with Unix/Linux and software development practices (documenting, debugging, testing, maintaining), are also valuable.
You have strong verbal and written communication skills.
You are a passionate team player who seeks to solve the most difficult problems in innovative ways.
Note: this position is open to competition from global talent.
(We will only contact qualified candidates for an interview.)
About Avalanche Computing
Avalanche Computing Taiwan Inc. was founded by NVIDIA USA alumni in Santa Clara, CA, USA, in 2018. At the end of 2018, the core team was assembled in Taipei, Taiwan. After a year of collaboration with many enterprises, Avalanche Computing Taiwan Inc. was registered in February 2020. In its first month, the company was named one of Airbus's global top-10 innovation startups (March 2020), joined the NVIDIA global Inception program, and joined NTUTEC.
The barrier to entry in AI is lower than ever thanks to open-source software, including a number of frameworks (TensorFlow, Keras, PyTorch, etc.). However, to develop a specific AI application, a company's engineers still need to build the data pipeline, computing environment, and AI models. These processes remain difficult for traditional companies and SMEs.
To overcome these challenges, we provide hyper-scale computing techniques for deep learning. Our hyper-scale computing framework is a deep-tech approach to computing. With it, clients can focus on innovation while we handle the rest: for example, we provide hyper-scale architecture, multi-GPU and distributed model training, and hyper-scale inference on 10,000+ edge devices. Clients simply load their AI models into our SaaS AI development tool in minutes.
We have also won several awards:
Our offices are located as follows:
The SaaS product has now launched in the U.S.
If you are interested in this position, please send your CV to: [email protected]