Assessing Technical Integrity in Our Consumer Vertical

22.07.24 by N. Mert Aydin

6 min read

Our consumer vertical at Delivery Hero consists of two big tribes: Consumer Discovery and Consumer Insights & Marketing Technologies. We are responsible for providing global services and high-quality data that serve our brands and other verticals, enhancing and enriching their business flows. This includes areas like search, recommendation, customer attributes, marketing decisions and many more. The domains and squads within these tribes consist of tech and product representatives such as software engineers, machine learning engineers, data engineers, data scientists, engineering managers, product managers and product analysts. Our success is the result of strong and sincere teamwork with all of our stakeholders.

As technology continues to evolve, ensuring the integrity of the technical components within our vertical becomes a challenge, just as it does for any other technical team or even company. Our recent “Consumer Technical Integrity Assessment” aims to evaluate and enhance the integrity of our technology building blocks, focusing on identifying areas for improvement and supporting better operational practices.

What is the Consumer Technical Integrity Assessment?

The Consumer Technical Integrity Assessment is an interview-driven technical check-up designed to pinpoint areas for improvement from an integrity perspective. This assessment addresses the need for continuous evaluation and status tracking within our tribes, domains, and squads, while also helping to clarify the balance between administrative tasks and delivering business value.

Aims of This Assessment

When we started our assessment, we had several aims in mind: 

  • Have a list of technical building blocks handy so that anyone can use it as a quick and up-to-date reference
  • Have a list of the skills actually expected for these technical building blocks, given how they are positioned; having contributors skilled in these building blocks can reduce the risk a particular technical building block carries within a squad
  • Have an alternative perspective on which chapters/workstreams could be formed in our vertical (and even at Delivery Hero)
  • Come up with risk scores for technical building blocks and use them in mitigation plans for the corresponding squads as they prioritize their quarterly work
  • Form a basis for the skills that squads might require, and use it as a call for training/certifications where applicable to enable contributor growth and up-skilling as well as a boost in team performance

Methodology

After the initial run, we documented our methodology as a process so that this approach could be driven by anyone in the vertical when we needed to repeat it.

Our methodology involves three main steps: Interview, verification and aggregation.

  • Interview: This step involves scheduling and conducting 30-45 minute interviews with technical representatives from each squad to go over the latest system architecture designs. We evaluate each technical building block with respect to multiple dimensions (explained in the next section).
  • Verification: This involves fine-tuning the entries we collected during the interview. This step is just as important as the previous one: we go over the results with the technical representative to make sure there are no obvious problems or bad assumptions on our side.
  • Aggregation: This step involves obtaining risk scores for each technical building block per squad and creating an average risk score per technical building block within the tribe. We also analyse the risk score distribution for the same technical building block across squads. Eventually, we can brainstorm about whether and why the same technical building block ends up with different risk scores per squad or at the tribe level, so that we can learn from and share each other's experiences (see the sketch below).
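
To make the aggregation step more concrete, here is a minimal sketch in Python; the squads, building blocks and scores are purely illustrative and not our actual data:

```python
# A minimal sketch of the aggregation step; squads, blocks and scores are illustrative.
from collections import defaultdict
from statistics import mean

# (squad, technical building block, risk score in [0, 1]) as produced per interview
risk_scores = [
    ("Squad ABC", "Kubernetes", 0.42),
    ("Squad ABC", "PostgreSQL", 0.31),
    ("Squad XYZ", "Kubernetes", 0.58),
    ("Squad XYZ", "PostgreSQL", 0.27),
]

# Average risk score per technical building block within the tribe
per_block = defaultdict(list)
for squad, block, score in risk_scores:
    per_block[block].append(score)

tribe_average = {block: round(mean(scores), 2) for block, scores in per_block.items()}
print(tribe_average)  # {'Kubernetes': 0.5, 'PostgreSQL': 0.29}
```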

Evaluation Dimensions

As mentioned in the previous section, our assessment takes into consideration a variety of dimensions when evaluating technical building blocks. We clustered our evaluation dimensions based on their contribution to the overall risk score:

  • Foundational:
    • Confidence
    • Skill set
  • Architectural:
    • Order of appearance
    • Dependency effect
  • Operational:
    • Outsourced administration
    • Incidents

Foundational dimensions are based on the technical representative's own statements, reflecting how the team is perceived from within. Confidence captures the team's confidence in a particular tech building block (“not confident”, “somewhat confident”, or “very confident”), while skill set captures how many team members are skilled in that particular tech building block.

Architectural dimensions involve assessments that are decisions, mostly related to the system architectures and their evolution over time. The order of appearance assesses the significance of a particular tech building block based on where it appears in the happy path (“closest to the request”, “just before the request is first handled”, “primary interlocutor”, “side dish”). The dependency effect evaluates the impact if a particular tech building block goes down or disappears (“everything is down”, “working but slow / with outdated data”, “no effect”).

Finally, operational dimensions involve assessments based on more factual, data-driven aspects. Outsourced administration assesses whether the tech building block is a managed service or self-hosted. Incidents are historical counts involving a particular tech building block. Note that an incident count does not necessarily mean the tech building block was the primary cause of an incident; as long as it is mentioned in the incident report (particularly in the root cause analysis), we count it. We look at incidents over both a recent period (the previous quarter) and a longer one (the whole year).
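
To give a sense of how such categorical answers could be turned into numbers, here is an illustrative sketch; the mappings and the helper below are hypothetical and are not the exact values we use internally:

```python
# Purely illustrative mappings from interview answers to values in [0, 1];
# higher means higher risk. These are not the exact values used in our assessment.
CONFIDENCE = {
    "very confident": 0.0,
    "somewhat confident": 0.5,
    "not confident": 1.0,
}
DEPENDENCY_EFFECT = {
    "no effect": 0.0,
    "working but slow / with outdated data": 0.5,
    "everything is down": 1.0,
}
MANAGED_SERVICE = {"managed service": 0.0, "self-hosted": 1.0}

def skill_impact(skilled: int, total: int) -> float:
    """Hypothetical: fewer skilled contributors relative to team size means higher risk."""
    return 1.0 - skilled / total
```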

Evaluation dimensions

For risk scoring, we do not always use the above-mentioned dimensions directly; for some of them we use ratios. To distinguish these derived values from the raw dimensions, we refer to them as impacts. Each impact is assigned a weight; here are the weights we used for the most recent round:

  • Managed Service Impact: 15%
  • Confidence Impact: 15%
  • Order Impact: 10%
  • Dependency Effect Impact: 15%
  • Incident Ratio for Q1 2024: 18%
  • Incident Ratio for 2024: 12%
  • Skilled-to-Total Contributor Ratio: 15%

These weights are used to calculate risk scores for tech building blocks, which help highlight areas needing improvement.
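
As a rough sketch of how these weights can be combined into a single score: only the weights come from the round described above, while the impact values and the mapping of answers to numbers below are hypothetical.

```python
# Weights from the most recent round (they sum to 100%).
WEIGHTS = {
    "managed_service_impact": 0.15,
    "confidence_impact": 0.15,
    "order_impact": 0.10,
    "dependency_effect_impact": 0.15,
    "incident_ratio_q1_2024": 0.18,
    "incident_ratio_2024": 0.12,
    "skilled_to_total_contributor_ratio": 0.15,
}

def risk_score(impacts: dict) -> float:
    """Weighted sum of impact values, each assumed to be in [0, 1]."""
    return sum(WEIGHTS[name] * value for name, value in impacts.items())

# Hypothetical impacts for one building block in one squad of 9 contributors:
example = {
    "managed_service_impact": 1.0,                    # self-hosted
    "confidence_impact": 0.5,                         # "somewhat confident"
    "order_impact": 0.75,                             # appears early in the happy path
    "dependency_effect_impact": 1.0,                  # "everything is down" if it fails
    "incident_ratio_q1_2024": 0.2,
    "incident_ratio_2024": 0.1,
    "skilled_to_total_contributor_ratio": 1 - 3 / 9,  # 3 of 9 skilled (assumed inversion)
}
print(round(risk_score(example), 2))  # 0.6
```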

To make it easier to understand the output of an interview, here is a mock example of what it looks like for each technical building block (e.g. Kubernetes, PostgreSQL) for a squad (e.g. Squad ABC) with 9 contributors:

Squad example

Interpreting the Results

Once we have everything in hand, we produce two aggregations: the average risk score per technical building block within a tribe, and the risk score distribution of each technical building block across the tribe's squads. From these we carve out an action plan as well as recommendations for improvement to mitigate the risk.

Average risk score per technical building block within a tribe: if we had to pick two technical building blocks to act on, they would be Tech I and Tech H based on the aggregated results (mock).
Technical building block risk score distribution among the tribe's squads: this view is particularly useful for brainstorming how the same technical building block is perceived across squads and whether one particular squad has too many technical building blocks to manage.
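
A small illustration of that second view, again with hypothetical squads and numbers: looking at the spread of the same building block's risk score across squads is a quick way to spot diverging experiences.

```python
# Hypothetical per-squad risk scores for the same building block.
from statistics import mean, pstdev

scores_by_squad = {"Squad ABC": 0.42, "Squad DEF": 0.65, "Squad XYZ": 0.58}

print(f"mean={mean(scores_by_squad.values()):.2f}, spread={pstdev(scores_by_squad.values()):.2f}")
# A large spread suggests that squads operate the same building block very differently,
# which is a good starting point for a knowledge-sharing session.
```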

We share the recommendations with the tribe leads for their input and prioritization. Based on the plan and prioritization, we can always re-run the assessment to observe improvements in risk scores.

The recommendations cover various aspects: trying out or switching to managed services, aligning with our cloud-native strategy, and reducing the operational burden on squads. Managed services offer scalable, reliable solutions that allow teams to focus on delivering business value. We also organize workshops and knowledge-sharing sessions to increase confidence in key technical building blocks while improving the skill set of the contributors.

Conclusion and Next Steps

Based on internal feedback, the assessment provides valuable insights into the current state of our technical components and highlights areas for improvement in our architectures and skills. By implementing the recommended actions, we can improve integrity, ultimately supporting better business outcomes and operational efficiency. Additionally, reviewing these technical building blocks with representatives from the squads helps us understand the challenges in our designs and the operational burden we carry during execution.

As soon as we concluded the study, we added the output to our engineering productivity dashboard (where we track engineering metrics including DORA and more) so that we can proactively monitor our progress as we apply the outcomes of upcoming iterations:

Dashboard widget

If you like what you’ve read and you’re someone who wants to work on open, interesting projects in a caring environment, check out our full list of open roles here – from Backend to Frontend and everything in between. We’d love to have you on board for an amazing journey ahead.

N. Mert Aydin
Principal Software Engineer