EXPERT COURSE

Prompt Engineering, Evaluation, & Refinement

Over 65% of Companies Globally Use Generative AI for Business Functions

In 2024, more than 65% of companies globally reported regularly using generative AI for business functions, and many more plan to adopt it soon.

Yet as adoption soars, a frequent problem emerges: poorly written prompts lead to inconsistent, inaccurate, or low-value outputs, which means wasted time, wasted compute, and reputational risk. Research shows that users who apply structured, context-aware prompt engineering report significantly better efficiency and output quality than those who don’t.

Master the Skills to Obtain Professional-Grade Outputs

That’s where this course comes in.
In just 6 hours, you’ll master the exact skills needed to reliably steer Large Language Models (LLMs) toward professional-grade outputs.
Whether you’re a translator, content creator, localization engineer, or AI-driven data professional, a strong prompt engineering skill set dramatically increases accuracy, reduces errors (e.g. hallucinations), and ensures your investment in AI actually delivers value.
Prompt engineering is becoming an essential digital competence that professionals across industries should have.


Conditions: Please read our course and subscription plan terms and conditions carefully. With your registration, you confirm that you have read, understood, and accepted our terms and conditions.

If you have any questions, please visit the FAQ section (for courses or subscription plans) or get in touch with us.


This course includes:

  • Expert coach: Andrés Romero Arcas, Language Technology Expert at Acolad
  • Interactive activities
  • Lifetime access to contents
  • In English
  • Completion certificate
  • Money-back guarantee
  • Acquire A-Z knowledge about AI Translation
  • When: 17-19 February 2026 - 18.00 CET
  • Duration: approx. 6 hours

Course and Instructor Description

This focused course trains participants to master prompt engineering: reliably guiding LLMs to produce the desired output formats, and applying the frameworks needed for rigorous evaluation and for enhancement through external context (Retrieval-Augmented Generation, RAG) or targeted training (fine-tuning).
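To give a flavour of the RAG idea covered in the course, here is a minimal illustrative sketch: retrieve the best-matching snippet from a tiny knowledge base by word overlap, then inject it into the prompt as grounding context. This is a toy keyword retriever, not what production RAG systems use, and all names and data below are invented for the example:

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the query."""
    query_words = tokenize(query)
    return max(documents, key=lambda d: len(query_words & tokenize(d)))

def build_prompt(query: str, context: str) -> str:
    """Ask the model to answer only from the retrieved context."""
    return (
        "Answer using ONLY the context below.\n"
        f"Context: {context}\n"
        f"Question: {query}"
    )

docs = [
    "Glossary: 'lead time' translates to 'plazo de entrega' in Spanish.",
    "Style guide: use formal register for all customer-facing content.",
]
query = "How do we translate 'lead time'?"
prompt = build_prompt(query, retrieve(query, docs))
print("plazo de entrega" in prompt)  # True: the glossary entry was retrieved
```

Real systems replace the keyword overlap with semantic (embedding-based) retrieval, but the prompt-assembly step works the same way.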

What you will learn:

  • Core principles and structure of effective prompt engineering (system, user, assistant prompts; parameters; context use)
  • Advanced prompting strategies to ensure reliable and consistent output across tasks
  • How to iteratively refine prompts from first draft to optimized version
  • Methods for rigorous evaluation of LLM outputs: building test samples, using LLMs as judges, applying classification metrics, balancing risk, and integrating human-quality review
  • How and when to use external context (e.g. Retrieval-Augmented Generation, RAG) or fine-tuning to improve results, especially for specialized tasks or domain-specific content
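As a taste of the first bullet, the system/user/assistant structure can be sketched as a chat-style message list. The OpenAI-style format is shown here as an illustration; field names and roles may differ by provider, and the helper function is invented for this example:

```python
def make_messages(instructions: str,
                  examples: list[tuple[str, str]],
                  query: str) -> list[dict]:
    """Build a few-shot chat prompt: system rules, example turns, real query."""
    messages = [{"role": "system", "content": instructions}]
    for user_turn, assistant_turn in examples:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": query})
    return messages

msgs = make_messages(
    "You are a translator. Reply with the Spanish translation only.",
    [("Good morning", "Buenos días")],   # one worked example (few-shot)
    "Thank you very much",
)
print(len(msgs), msgs[0]["role"], msgs[-1]["role"])  # 4 system user
```

The system message sets persistent behaviour, the example pair demonstrates the expected output format, and the final user message carries the actual task.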

Instructor: Andrés Romero Arcas
Andrés Romero Arcas is an AI and Language Technology Specialist with strong experience in NMT, LLMs, MTQE, and automated translation workflows. He focuses on practical, scalable AI solutions for localization teams and is known for explaining complex technology in a clear, actionable way.
Session 1: Prompting Foundations and Advanced Strategies
Session 2: Prompt Refinement Process: Iteration and Evaluation
Session 3: Evaluation & Refinement
Session 1. Prompting Foundations and Advanced Strategies

1.1 Definition and Purpose
1.2 Core Principles for Effective Prompt Engineering
1.3 Standard Prompt Structure
1.4 System, User & Assistant Prompts
1.5 Useful Parameters
1.6 Prompt Engineering: Advanced Strategies
Session 2. Prompt Refinement Process: Iteration and Evaluation

2.1 Prompt Creation Process
2.2 Prompt Refinement Process
2.3 What can we do to improve the results?
2.4 Prompt Evaluation: Building the right test sample
2.5 Prompt Evaluation Methods
Session 3. Evaluation & Refinement

3.1 Prompt Evaluation Methods (continued)
3.2 LLM as a Judge
3.3 Metrics for Classification Models (MTQE)
3.4 Risk Balance
3.5 Human Quality Evaluation
3.6 Key Ideas & Recommendations
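As a preview of item 3.3, classification-style quality estimation is typically scored with precision, recall, and F1. The sketch below is illustrative, not course material; the labels are invented (1 = segment flagged as needing post-editing, 0 = passed):

```python
def precision_recall_f1(y_true: list[int],
                        y_pred: list[int]) -> tuple[float, float, float]:
    """Binary classification metrics from true and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# 6 segments: the classifier flags 3, of which 2 truly needed editing.
truth = [1, 1, 0, 1, 0, 0]
pred  = [1, 1, 1, 0, 0, 0]
p, r, f = precision_recall_f1(truth, pred)
print(round(p, 2), round(r, 2), round(f, 2))  # 0.67 0.67 0.67
```

The precision/recall trade-off is exactly the "risk balance" question in item 3.4: a stricter classifier misses fewer bad segments (higher recall) at the cost of flagging more good ones (lower precision).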
Meet Andrés Romero

Andrés is a proficient language technology expert with over a decade of experience in the Localization Industry.
Throughout his career, he has held diverse roles, such as CAT Tool Specialist, Localization Engineer, and Operations Technology Coordinator, in which he led a team of localization engineers.
Currently at Acolad, Andrés focuses on machine translation evaluation and engine training. He is also deeply involved in prompt engineering and Generative AI, proposing AI-driven solutions to deliver tailored, customer-centric services and to tackle challenges in production.
Andrés is passionate about automating and optimizing processes to enhance productivity and efficiency, improving quality and integrating innovation into localization workflows.
Andrés Romero - Course Creator & Host