Performance Testing Workshop
Mastering performance testing to ensure scalability, reliability, and real business impact.
This hands-on workshop introduces participants to the fundamentals of performance testing, focusing on techniques that align with real business metrics rather than just system metrics. Participants will learn to build and run performance tests that provide meaningful insights, manage performance testing in pipelines, and create robust, reproducible test environments. The workshop empowers teams to make data-driven improvements that enhance system reliability and meet customer-centric KPIs.
Who Should Attend?
Engineers, QA testers, DevOps professionals, and technical leads interested in integrating performance testing into their development process. Ideal for those looking to gain practical skills in setting up and running performance tests that provide actionable business insights.
Key Benefits of Attending
- Real-World Focus: Gain practical skills in performance testing that align with business goals.
- Reproducible Testing: Learn techniques to set up and document performance tests for repeatability and accuracy.
- Continuous Testing Integration: Understand how to integrate performance testing into CI/CD pipelines for fast feedback.
- Enhanced System Reliability: Improve system reliability by testing and optimising performance under realistic conditions.

Course Delivery Format
Duration: Available as a 1-day workshop or a 2-day course
Format: In-person, online, or hybrid
Interactive Elements: Includes hands-on exercises, group discussions, and real-world examples
Course Modules & Learning Outcomes
1. Snapshot Current Context


Learn to document and commit the current platform setup to source control, making test setups explicit, reproducible, and aligned with the desired scale.
Learning Outcomes:
- Understand the importance of a known, reproducible setup for accurate performance testing.
- Learn to commit the test context and platform setup to source control, ensuring consistency.
- Practise snapshotting the test environment for reproducible and reliable results.
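The snapshotting idea above can be sketched in a few lines. This is a minimal, hypothetical example - the `app_version` and `instance_count` fields stand in for whatever knobs describe your own platform - showing how a test context can be written to a stable file that is committed alongside the tests.

```python
import json
import platform
import sys

def snapshot_context(extra=None):
    """Capture the test environment as a dict suitable for committing to source control."""
    extra = extra or {}
    return {
        "python_version": sys.version.split()[0],
        "os": platform.system(),
        # Hypothetical platform knobs you would record for your own setup:
        "app_version": extra.get("app_version"),
        "instance_count": extra.get("instance_count"),
    }

def write_snapshot(path, context):
    # Sorted keys and fixed indentation give stable diffs once the file is in git.
    with open(path, "w") as f:
        json.dump(context, f, indent=2, sort_keys=True)

snapshot = snapshot_context({"app_version": "1.4.2", "instance_count": 3})
write_snapshot("test-context.json", snapshot)
```

Committing `test-context.json` next to the test scripts means a later run can be checked against the exact setup the original results were gathered under.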
2. Defining Business-Level Metrics


Shift focus from traditional system metrics to metrics that reflect real business impact, such as customer-centric KPIs.
Learning Outcomes:
- Identify business-focused metrics (e.g., donations per second rather than HTTP response time) that align with organisational goals.
- Understand the importance of testing for the behaviours these metrics support.
- Practise designing tests that provide insights into business impact rather than just technical performance.
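To make the metric shift concrete, here is a small sketch (using hypothetical test results) that reports completed donations per second instead of response time - the same raw data, summarised at the level the business cares about.

```python
from dataclasses import dataclass

@dataclass
class Request:
    succeeded: bool
    amount: float  # donation value

def donations_per_second(requests, duration_seconds):
    """A business-level metric: completed donations per second,
    rather than raw HTTP response time."""
    completed = sum(1 for r in requests if r.succeeded)
    return completed / duration_seconds

# Hypothetical results from a 10-second test run:
results = [
    Request(True, 5.0),
    Request(True, 20.0),
    Request(False, 10.0),  # a failed donation counts against the KPI
    Request(True, 5.0),
]
rate = donations_per_second(results, duration_seconds=10)
print(rate)  # 0.3 completed donations per second
```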
3. Creating a Mocking Platform


Develop skills to create a mock platform for systems that cannot be load tested directly, enabling comprehensive performance testing without reliance on live third-party systems.
Learning Outcomes:
- Learn to mock external dependencies, such as third-party payment providers, to enable isolated testing.
- Practise setting up mocks that can simulate various failure scenarios and latency issues.
- Understand the value of mocking and stubbing in creating resilient, scalable test platforms.
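A mock payment provider of the kind described above might look like the following sketch. The `failure_rate` and `latency_seconds` parameters are assumptions you would tune to match the observed behaviour of the real third party; a seeded random generator keeps failure injection reproducible.

```python
import random
import time

class MockPaymentProvider:
    """A stand-in for a third-party payment API that cannot be load tested directly."""

    def __init__(self, failure_rate=0.0, latency_seconds=0.0, seed=None):
        self.failure_rate = failure_rate          # fraction of charges that are declined
        self.latency_seconds = latency_seconds    # simulated upstream latency
        self._rng = random.Random(seed)           # seeded for reproducible failure scenarios

    def charge(self, amount):
        time.sleep(self.latency_seconds)
        if self._rng.random() < self.failure_rate:
            return {"status": "declined", "amount": amount}
        return {"status": "ok", "amount": amount}

# Happy-path mock for a baseline run; raise failure_rate to rehearse failure scenarios.
provider = MockPaymentProvider(failure_rate=0.0)
print(provider.charge(25.0))  # {'status': 'ok', 'amount': 25.0}
```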
4. Performance Testing in the Pipeline


Integrate performance testing into CI/CD pipelines to allow continuous performance monitoring, alerting, and automated testing.
Learning Outcomes:
- Understand the importance of continuous performance testing and monitoring in the development pipeline.
- Learn to run basic performance tests on every commit, with larger tests on a scheduled basis.
- Practise setting up alerts for performance thresholds, ensuring fast feedback on performance impacts.
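The threshold-alerting idea can be expressed as a small pipeline gate. This is a hedged sketch with hypothetical threshold values; in CI you would exit non-zero when any threshold is breached so the build fails and the team is alerted.

```python
# Hypothetical thresholds for the pipeline gate; tune these to your own KPIs.
THRESHOLDS = {
    "donations_per_second": 2.0,  # minimum acceptable throughput
    "error_rate": 0.01,           # maximum acceptable error fraction
}

def check_thresholds(results):
    """Return a list of breached thresholds; an empty list means the gate passes."""
    breaches = []
    if results["donations_per_second"] < THRESHOLDS["donations_per_second"]:
        breaches.append("donations_per_second below minimum")
    if results["error_rate"] > THRESHOLDS["error_rate"]:
        breaches.append("error_rate above maximum")
    return breaches

results = {"donations_per_second": 2.4, "error_rate": 0.005}
breaches = check_thresholds(results)
# In a real pipeline: sys.exit(1) when breaches is non-empty.
print("FAIL:" if breaches else "PASS", breaches)
```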
5. Approach to Running a Performance Test


Learn a structured approach to setting up, running, and recording performance tests, from simple initial tests to more complex scenarios.
Learning Outcomes:
- Practise defining success criteria for each performance test based on target KPIs.
- Develop skills in setting up, running, and recording test results systematically.
- Learn the importance of resetting the platform after each test to avoid test contamination.
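The structured approach above - define success criteria, run, record, reset - can be sketched as a single harness function. The `execute` and `reset` callables here are placeholders for your own load-generation and environment-reset steps.

```python
def run_performance_test(name, criteria, execute, reset):
    """Run one test against its success criteria, record the result,
    then reset the platform so the next test starts from a known state."""
    measured = execute()
    passed = all(measured[k] >= minimum for k, minimum in criteria.items())
    record = {"test": name, "measured": measured, "criteria": criteria, "passed": passed}
    reset()  # avoid contaminating the next test with leftover state
    return record

# Hypothetical baseline test against a target KPI:
record = run_performance_test(
    name="checkout-baseline",
    criteria={"donations_per_second": 2.0},
    execute=lambda: {"donations_per_second": 2.5},
    reset=lambda: None,
)
print(record["passed"])  # True
```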
6. Receiving Feedback


Understand the value of feedback as data and learn techniques for receiving it with an open mind.
Learning Outcomes:
- Develop a habit of saying “thank you” when receiving feedback, regardless of content.
- Avoid defensiveness and use feedback as data to guide personal and professional growth.
- Practise keeping a feedback log to reflect on feedback and track personal development.
Register for the Course
Interested in joining? Get in touch to learn more about dates, availability, and pricing.
How is Armakuni different?
We show people what good looks like, because we have experienced it many times. We put metrics on the landscape to help understand where to focus and to demonstrate the change. We enable your people to deliver the change through coaching and pair programming. Success for us is stepping back out of a modern, cloud native engineering/technology/digital function.
What is Armakuni Insights?
Many of our clients do not have the insights to help them really see what is going on within technology. A technology practice within an organisation is a function of its people, teams, organisational structures, leadership, technical direction and strategic direction. No two organisations are the same, and their ability to perform is dependent on many intangible factors. We’ve created a series of exercises based on industry best practices to help you better articulate the true state of your team. By using a combination of quantitative, data-driven metrics and qualitative insights, we provide your teams with a sense of their strengths and areas for improvement.
What is Armakuni Way?
The AK Way is a collection of approaches for “delivery with engineering agility” that we have used for many years as a baseline when engaging with clients. It’s not meant to be a fixed approach model, nor is it the only way we work - as we all know, operating in the world of software is about adaptation and pragmatism - but these approaches have served us well across a range of industries, projects and engagements, and are constantly evolving. If the client, team or department we are working with doesn’t have an approach in place for any of these practices, then we have something to fall back on.
How will you work with us?
Most of our engagements with clients are about helping them change how they deliver technology, whether helping with the adoption of scalable microservices or building self-serve infrastructure platforms. For the most part, though, we are helping our clients adopt the mindset, practices and approaches that will enable this beyond our time onsite: modern, cloud native engineering practices. Alongside this, we enable change in the technology function as a whole (structurally) and in how it interacts with the rest of the business - whether that’s with business functions, governance, audit/security or others. Below is a “typical” engagement model, but our approaches are modular/productised and so we often deliver just one part of it.
Step 1 - Understand the landscape/topography. In order to work with a client, we need to understand what is going on in their organisation, with an external focus/viewpoint.
Step 2 - Start to plan the roadmap. Once we have a view on where the organisation is at, we work with leadership on where they are trying to get to, aligning to the organisation strategy and/or the engineering/technology strategy, and build out a roadmap.
Step 3 - Educate. In an ideal world, the entire (technology) organisation understands what we are trying to achieve. Typically we find a lot of “unconscious incompetence” - i.e. people don’t know what they don’t know - so we run hands-on sessions/workshops to demonstrate what good looks like. This aligns the whole organisation to the approaches and mindset we are trying to instil in the teams, and should create a sense of desire around that end goal.
Step 4 - Start to drive change. This can take many forms, but all are based around coaching individuals or teams through the change. Example: an Engineering Accelerator, where we embed a pair of experienced practitioners into a team to coach and pair (rotating round your team) for 3 months, shaping that team into a modern cloud native engineering mindset and ensuring that the practices are embedded and desired by the team, not a chore.
Step 5 - Observe, Orientate, Decide, Act. Constant observation, sensing what is working and what is not, learning and adapting as we go.
How do you measure business outcomes?
At a technology/implementation level, Armakuni uses the DORA research and metrics, based on over 12 years of surveys, research and data collation, to benchmark and guide technology performance metrics towards business outcomes. We combine this with our Engineering Insights, which helps us look at broader environmental metrics - capabilities, approaches, psychological safety, cognitive load - all significant factors in a function's ability to respond to business/organisational needs. At a more holistic/contextual level, we identify the symptoms and issues that the broader business is experiencing and tie these back to the metrics we gather, so we can demonstrate change over time.