AI Engineer

Build AI Agents and Intelligent Systems for Global Brands and Enterprises

Mindstix is seeking AI Engineers to build Generative AI applications such as LLM‑powered AI Agents, Copilots, and Automation Workflows. This is a hands-on engineering role focused on leveraging frontier AI models to create secure, reliable, and scalable Agentic AI solutions for the modern enterprise. You will work on our cutting-edge AI stack, cloud-native AI platforms, and large-scale datasets to deliver innovation for leading global companies.

Roles and Responsibilities

Integrate with modern LLMs and foundation models to design and build next-gen applications: AI agents, enterprise copilots, conversational AI bots, personalization and recommendation engines, and insights engines.
Build and optimize RAG pipelines using modern AI orchestration frameworks - Data ingestion, chunking, embeddings, vector storage, prompt engineering, retrieval strategies, and response synthesis.
Fine-tune or adapt foundation models with instruction tuning, hyperparameter tuning, sampling, and retrieval strategies for specific customer domains and use cases.
Integrate AI capabilities into the modern enterprise ecosystem: Implement the Model Context Protocol (MCP) and Agent-to-Agent communication protocols.
Implement observability and evaluation for AI systems: Tracing, quality metrics, evaluation datasets, feedback capture, and guardrails.
Inference optimization: Optimize latency, throughput, and costs for LLM‑backed solutions; Experiment with model sizes, caching, batching, and routing strategies.
Collaborate on AI solution design with Product Managers and Designers to shape innovative ideas, refine product requirements, and design human-AI interactions.
Collaborate with Data Engineers / MLOps teams to operationalize AI solutions, automate training workflows, and manage production environments.
Stay current with cutting-edge developments in Generative AI, frontier models, orchestration frameworks, vector databases, and tools for LLMOps, bringing best practices to the AI SDLC.
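The RAG pipeline stages named above (ingestion, chunking, embeddings, vector storage, retrieval, prompt engineering) can be sketched in miniature. This is a toy illustration only: the bag-of-words "embedding" and in-memory index stand in for a real embedding model and vector database, and every function name here is hypothetical.

```python
from collections import Counter
import math

def chunk(text, size=40):
    """Ingestion + chunking: split a document into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy bag-of-words 'embedding'; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, index, k=2):
    """Retrieval strategy: rank stored chunks by similarity to the query."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

def build_prompt(query, contexts):
    """Prompt engineering: assemble the grounded prompt for response synthesis."""
    joined = "\n".join(f"- {c}" for c in contexts)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

# Vector storage: a simple in-memory list of (chunk, embedding) pairs.
docs = "Mindstix builds AI agents. RAG grounds LLM answers in retrieved context."
index = [(c, embed(c)) for c in chunk(docs, size=6)]
print(build_prompt("What does RAG do?", retrieve("RAG grounds answers", index)))
```

In production, the grounded prompt would be sent to an LLM for the final response-synthesis step, and the index would live in a vector store such as those listed in the stack below.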

Who Fits Best?

You are a passionate engineer who enjoys solving complex, open‑ended problems with real-world constraints.
You enjoy building AI solutions from demos to production - owning safety, reliability, performance, and maintainability.
You thrive in a fast‑paced dynamic environment, comfortable with context‑switching across multiple ideas and projects.
You are curious about the end-to-end stack, from data pipelines and vector stores to API design and user interactions.
You care about engineering discipline: clean abstractions, test automation, observability, and security.
You embrace constant learning as the AI landscape rapidly evolves, and you share your learnings with our thriving community.

Our AI Engineering Stack

Core Software Engineering:
Strong in Python or TypeScript/Node.js; Solid understanding of data structures, algorithms, and foundations of distributed systems.
REST/gRPC APIs, microservices, message queues, and cloud-native architecture.
Generative AI, LLMs, Foundation Models:
OpenAI, Azure OpenAI Service, AWS Bedrock, Anthropic, Google Vertex AI or similar hyperscaler platforms. Open‑source models (Llama, Mistral) and serving stacks.
Prompt engineering, hyperparameter tuning, tool/function calling, and basic fine‑tuning.
AI Orchestration Frameworks & Libraries:
PyTorch, TensorFlow, Hugging Face Transformers.
LangChain, LlamaIndex, or similar orchestration frameworks.
Data Platforms:
Vector databases - Pinecone, Weaviate, Qdrant, pgvector/FAISS for semantic search and RAG.
SQL/NoSQL databases, enterprise data lakes/warehouses on Azure / GCP / AWS.
MLOps / LLMOps:
MLflow, Weights & Biases, LangSmith or similar tools for experiment tracking, observability, and evaluation.
CI/CD for ML/LLM pipelines, containerization (Docker), orchestration (Kubernetes).

Qualifications and Skills

Bachelor’s or Master’s degree in Computer Science, Information Technology, or allied streams.
Strong software engineering fundamentals plus experience in the AI Engineering stack (above). Strong skills in technical debugging and analysis.
5+ years of hands-on experience in software engineering, including work on Machine Learning, AI, data-intensive systems, or LLM-based features.
Proven experience building production-grade backend services or platforms.
Hands-on experience with at least one hyperscaler cloud platform (Azure, AWS, or GCP).
Familiarity with core ML concepts (training vs. inference, evaluation, overfitting, monitoring) and modern Generative AI application patterns (RAG, Prompt Engineering, Agents, MCP).
Experience working directly with global customers is a plus, especially in Ecommerce, Retail, CPG, or SaaS Product Engineering domains.

Benefits

Competitive Compensation and Perks
Accelerated Career Paths
Flexible Work Environment
Rewards and Recognition
Health Insurance Coverage
Sponsored Certifications
Culture of Innovation and Creativity
Mentorship by Industry Leaders
Global Brands, International Exposure
Cutting-edge Technologies

Location

Pune (India)
Flexi-timing
Hybrid options

Equal Opportunity Employer

Mindstix is committed to an inclusive and diverse work environment. We do not discriminate based on race, colour, ethnicity, ancestry, national origin, religion, gender, gender identity, gender expression, sexual orientation, age, disability, veteran status, marital status or any other legally protected status.

ABOUT US

Mindstix is an AI-native engineering studio for ambitious enterprises. We help brands reinvent commerce experiences, embed AI into products and operations, and modernize cloud-native platforms end to end - from strategy and architecture through UX, full‑stack development, data engineering, and DevOps.

AI Innovation. Engineered. Accelerated.

© 2026 Mindstix Software Labs.
All Rights Reserved.
