🎯𝐊𝐞𝐲 𝐑𝐞𝐬𝐩𝐨𝐧𝐬𝐢𝐛𝐢𝐥𝐢𝐭𝐢𝐞𝐬:
✅Design & implement end-to-end machine learning solutions to process & analyze large-scale datasets efficiently
✅Build robust data pipelines for high-volume data ingestion, transformation, & storage
✅Create advanced reporting systems & dashboards powered by AI to deliver actionable insights
✅Develop & deploy custom AI/ML models tailored to business requirements, including LLMs & generative AI solutions
✅Leverage frameworks like LangChain, LlamaIndex, & Hugging Face to implement & fine-tune large language models
✅Build systems to automate data-driven decision-making using predictive analytics & recommendation engines
✅Monitor, troubleshoot, & optimize AI/ML models in production environments
✅Collaborate with cross-functional teams to identify opportunities to enhance reporting & analytics capabilities with AI
✅Ensure scalability, reliability, & security in handling vast amounts of data
✅Maintain comprehensive documentation for data workflows, AI systems, & reporting tools
𝐏𝐫𝐨𝐟𝐞𝐬𝐬𝐢𝐨𝐧𝐚𝐥 𝐐𝐮𝐚𝐥𝐢𝐟𝐢𝐜𝐚𝐭𝐢𝐨𝐧𝐬 & 𝐄𝐱𝐩𝐞𝐫𝐢𝐞𝐧𝐜𝐞:
🎓Minimum of a Bachelor’s degree in a STEM field (Science, Technology, Engineering, or Mathematics)
💼At least 5 years of experience in machine learning, AI, & large-scale data processing, including:
✅Python, SQL, and data processing frameworks (Pandas, NumPy, PySpark, Dask)
✅Big data tools (Hadoop, Apache Spark, or similar)
✅Machine learning frameworks (TensorFlow, PyTorch, or similar)
✅Building and deploying large language models (LLMs) and generative AI systems
✅REST APIs and microservices architecture for scalable AI solutions
✅Version control (Git) and CI/CD pipelines for MLOps
🎖️𝐊𝐞𝐲 𝐒𝐤𝐢𝐥𝐥𝐬 & 𝐂𝐨𝐦𝐩𝐞𝐭𝐞𝐧𝐜𝐢𝐞𝐬:
✅Model optimization & monitoring in production environments
✅NLP & generative AI techniques for text analysis & automation
✅Database management (SQL and NoSQL)
✅Strong mathematical & statistical foundation
✅Expertise in building data-driven reporting systems & tools
✅Knowledge of cloud platforms (AWS, Google Cloud, or Azure)
✅Understanding of data security & compliance best practices
✅Experience with distributed computing & big data architectures
✅Knowledge of A/B testing & experimental design for data-driven insights
✅Proficiency in containerization & orchestration (Docker, Kubernetes)
✅Published research or contributions to AI/ML open-source projects
✅Experience with advanced model optimization & hardware acceleration
✅Data visualization tools (Tableau, Power BI, or similar)
✅Real-time and batch processing of large datasets
✅Proven ability to process, analyze, and report on large-scale datasets
✅Hands-on experience with LLM optimization and generative AI pipelines
✅Familiarity with LangChain, LlamaIndex, Hugging Face, and similar LLM-focused tools