
AI-102 Azure AI Engineer: Building Intelligent Solutions with Azure AI Services

AI-102 is Microsoft's AI Engineer Associate certification — it validates your ability to design and implement AI solutions using Azure AI Services, Azure OpenAI Service, and Azure Machine Learning. Unlike AI-900 (conceptual) or the ML Specialist level, AI-102 tests hands-on implementation: calling Azure AI APIs, configuring language models, building search solutions, and deploying intelligent agents. This is the certification for developers and data scientists who build production AI applications on Azure.

11 min
3 sections · 10 exam key points

Azure AI Services: Language, Vision, and Speech

AI-102 tests practical use of Azure AI Services.

Azure AI Language:
  • Text analytics: sentiment analysis, key phrase extraction, named entity recognition (with categories and subcategories), and entity linking to Wikipedia
  • Custom named entity recognition: train on your own domain-specific entities
  • Custom text classification: single-label and multi-label
  • Question answering: build a custom QnA knowledge base from documents and FAQs; powers chatbot FAQ responses
  • Conversational language understanding (CLU): intent recognition and entity extraction for chatbots

Azure AI Vision:
  • Image analysis: objects, tags, captions, colour analysis, brands, celebrities
  • OCR (Read API): reads printed and handwritten text from images and PDFs
  • Face detection and analysis: face attributes such as age estimate, emotion, glasses, and head pose; not identity recognition by default
  • Custom Vision: train image classification or object detection models on your own images via the Custom Vision portal or SDK; no ML expertise required, with as few as 15 images per class

Azure AI Speech:
  • Speech-to-text: real-time and batch transcription, plus custom speech models for domain-specific vocabulary
  • Text-to-speech: natural-sounding neural voices, plus custom neural voice for a brand voice
  • Speech translation: real-time translation across 60+ languages
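As a concrete illustration, the pieces of a sentiment analysis call to the Language service can be assembled as below. This is a hedged sketch: the endpoint, key, and API version are placeholders rather than values from this guide, so verify the exact request shape against the current Language API reference.

```python
# Sketch: building a sentiment analysis request for the Azure AI Language
# REST API. Endpoint, key, and api-version are placeholders/assumptions;
# check the current API reference before relying on the exact values.
import json

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_VERSION = "2023-04-01"  # assumed version

def build_sentiment_request(texts, language="en"):
    """Return (url, headers, body) for a SentimentAnalysis call."""
    url = f"{ENDPOINT}/language/:analyze-text?api-version={API_VERSION}"
    headers = {
        "Ocp-Apim-Subscription-Key": "<your-key>",  # placeholder
        "Content-Type": "application/json",
    }
    body = {
        "kind": "SentimentAnalysis",
        "analysisInput": {
            "documents": [
                {"id": str(i), "language": language, "text": text}
                for i, text in enumerate(texts, start=1)
            ]
        },
    }
    return url, headers, body

url, headers, body = build_sentiment_request(["The support team was brilliant."])
payload = json.dumps(body)  # what you would POST to the service
```

The same `kind`-plus-`analysisInput` envelope carries the other text analytics operations (key phrase extraction, entity recognition), which is why they cluster together on the exam.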

Azure OpenAI Service and Generative AI

Azure OpenAI Service provides access to OpenAI models (GPT-4, GPT-4o, DALL-E 3, Whisper, text-embedding-ada-002) within Azure's security boundary.

Key concepts:
  • Deployments: you create a named deployment of a specific model version in your Azure OpenAI resource, then use the deployment name (not the model name) in API calls
  • Prompt engineering: the system prompt sets the AI's persona and constraints, user messages drive the conversation, and assistant messages are the AI's responses
  • Temperature: 0 gives near-deterministic output, 1 gives creative output; higher temperature means more variable responses
  • Max tokens: caps the length of the response

Chat completions API: POST /openai/deployments/{deployment-id}/chat/completions with a messages array, each entry carrying a role (system, user, assistant) and content.

Embeddings: convert text to vector representations; compare semantic similarity using cosine similarity, and power vector search in Azure AI Search.

RAG (Retrieval-Augmented Generation): augment LLM responses with your own data. Embed documents into a vector store; at query time, retrieve the relevant chunks and pass them as context to the LLM. Grounding responses in your data sharply reduces hallucinations.

Azure AI Foundry (formerly Azure AI Studio): a unified development platform for building and deploying generative AI applications.
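The deployment, messages, temperature, and max tokens concepts come together in a single chat completions request. A minimal sketch, assuming a hypothetical resource URL, deployment name, and API version:

```python
# Sketch: assembling a chat completions request for an Azure OpenAI
# deployment. The resource URL, deployment name "gpt4o-prod", and API
# version are placeholders/assumptions; substitute your own values.
import json

RESOURCE = "https://<your-resource>.openai.azure.com"  # placeholder
DEPLOYMENT = "gpt4o-prod"   # deployment name (not the model name)
API_VERSION = "2024-02-01"  # assumed; check the current API reference

def build_chat_request(system_prompt, user_message,
                       temperature=0.2, max_tokens=400):
    """Return (url, body) for a chat completions call."""
    url = (f"{RESOURCE}/openai/deployments/{DEPLOYMENT}"
           f"/chat/completions?api-version={API_VERSION}")
    body = {
        "messages": [
            # System prompt sets persona and constraints; the user
            # message carries the actual question.
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "temperature": temperature,  # 0 = near-deterministic, higher = more variable
        "max_tokens": max_tokens,    # caps the completion length
    }
    return url, body

url, body = build_chat_request(
    "You are a concise Azure support assistant.",
    "What is a deployment in Azure OpenAI?",
)
payload = json.dumps(body)  # what you would POST
```

Note that the URL path carries the deployment name, not the underlying model: that indirection is what lets you swap model versions behind a stable name, and it is a recurring exam point.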

Azure AI Search and Document Intelligence

Azure AI Search (formerly Cognitive Search): an enterprise search service with an AI enrichment pipeline.
  • Indexers: automatically pull content from Azure SQL, Cosmos DB, or Blob Storage; extract text, run AI skills, and store the results in a search index
  • Skillsets: the AI enrichment pipeline. The OCR skill extracts text from PDFs, the entity recognition skill tags entities, the key phrase skill extracts keywords, the sentiment skill scores documents, the image analysis skill describes images, and the custom Web API skill calls external services
  • Search index: defines the schema; field attributes include searchable (full-text search), filterable (exact match and range queries), facetable (aggregation for facet navigation), sortable, and retrievable
  • Semantic ranker: Microsoft's ML model re-ranks search results for relevance beyond keyword matching, adding semantic captions (highlighted relevant passages) and answers (direct answer extraction)
  • Vector search: store embeddings in the index and search by semantic similarity rather than keywords; hybrid search combines vector and keyword results

Document Intelligence (formerly Form Recogniser): extracts structured data from documents. Prebuilt models cover invoices, receipts, ID documents, business cards, and W-2 forms; custom models can be trained on your own document types.
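Hybrid search merges the keyword and vector result lists, and Azure AI Search documents Reciprocal Rank Fusion (RRF) as the method it uses. A toy sketch of the fusion step, with made-up document IDs and the commonly cited k constant of 60:

```python
# Sketch: merging keyword and vector result lists with Reciprocal Rank
# Fusion (RRF). Document IDs are illustrative; k=60 is the constant
# commonly cited for RRF, not a tuned value.
def rrf_merge(keyword_ranked, vector_ranked, k=60):
    """Each input is a list of doc IDs, best first. Returns the fused order."""
    scores = {}
    for ranked in (keyword_ranked, vector_ranked):
        for rank, doc_id in enumerate(ranked, start=1):
            # A document's fused score sums 1/(k + rank) over every list
            # it appears in, so agreement between lists boosts it.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]   # BM25-style keyword ranking
vector_hits  = ["doc1", "doc3", "doc9"]   # embedding-similarity ranking
fused = rrf_merge(keyword_hits, vector_hits)
```

Because RRF works on ranks rather than raw scores, it sidesteps the problem that BM25 scores and cosine similarities live on incomparable scales.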

Key exam facts — AI-102

  • Azure OpenAI deployments: create a named deployment for a specific model version — use deployment name in API calls
  • RAG pattern: embed documents into vector store, retrieve relevant chunks, pass as context to LLM
  • Temperature: 0 = near-deterministic output; higher = more creative/variable
  • Azure AI Search skillsets: OCR, entity recognition, key phrase, sentiment, custom web API skills
  • Semantic ranker re-ranks results using ML beyond keyword matching — adds captions and direct answers
  • Vector search: semantic similarity over embeddings; hybrid search combines vector and keyword results
  • Custom Vision: train image classification or object detection with as few as 15 images per class
  • CLU (Conversational Language Understanding): intent recognition and entity extraction for chatbots
  • Document Intelligence prebuilt models: invoice, receipt, ID, W2 — custom models for your document types
  • Azure AI Foundry: unified platform for developing and deploying generative AI applications
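The vector-search facts above rest on cosine similarity. A toy computation with made-up 3-dimensional vectors (real embeddings such as text-embedding-ada-002 have 1,536 dimensions):

```python
# Sketch: cosine similarity over toy "embeddings". The vectors are
# invented purely to show the arithmetic, not real embedding output.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm  # 1.0 = same direction, 0.0 = orthogonal

query     = [0.9, 0.1, 0.0]
doc_close = [0.8, 0.2, 0.1]   # points roughly the same way as the query
doc_far   = [0.0, 0.2, 0.9]   # points elsewhere

ranked = sorted([("doc_close", cosine_similarity(query, doc_close)),
                 ("doc_far", cosine_similarity(query, doc_far))],
                key=lambda pair: pair[1], reverse=True)
```

In a RAG pipeline this ranking step picks which chunks get passed to the LLM as context.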

Common exam traps

Azure OpenAI Service and the public OpenAI API are the same with different pricing

Azure OpenAI provides the same models but with Azure's enterprise security: private endpoints, no data used for OpenAI model training, Entra ID authentication, content filtering, regional data residency, and Microsoft's compliance boundary. The APIs are compatible but the governance model is fundamentally different.

RAG eliminates AI hallucinations completely

RAG significantly reduces hallucinations by grounding responses in retrieved context, but it does not eliminate them. The LLM can still misinterpret retrieved context, fail to find relevant documents, or confabulate details. Human review and confidence thresholds are still required for high-stakes applications.
