How to Avoid Technical Debt

Bill Hussain

Software Engineer

Technical debt is the cost of reworking a solution. Its sources can be complex, with multiple root causes. In general, technical debt stems from trade-offs made for short-term gains. These gains are usually judged by how well they satisfy stakeholder, business, and/or solution requirements. Under an Agile mindset, requirements are reprioritized often and in shorter sprints, so delivering short-term gains is common.

Software products and services use Test-Driven Development (TDD), code reviews, and code refactoring to control technical debt during solution implementation. The norm is to control for the side effects of technical debt rather than avoid it. In practice, this means teams rework the solution and accumulate debt in exchange for short-term gains.

Which is why we asked ourselves: How can product practitioners and software engineers avoid reworking software implementations? 

We propose bundling software-quality concerns before a sprint. Our thesis: mindfully targeting software-quality bundles reduces technical debt by avoiding software-quality trade-offs.

Our thesis doesn’t address the technical debt accrued from requirement gaps via iterative requirements engineering, or the inherent lack of understanding suffered in early sprints. Instead, it focuses on smarter accounting of debt.

In this article, we attempt to separate two concerns.

  • Quality Priority
  • Quality Trade-off

Software Quality

Non-functional requirements define the scope of a product. They don’t constructively describe the solution; they describe the conditions the solution is expected to perform within.

Quality attributes make up the majority of non-functional requirements. Other classes of non-functional requirements include constraints and external interface requirements. Quality attributes are those that help products delight users. They are expanded upon over project iterations and become the source of many functional requirements. Quality attributes are aspects all solutions have.

Quality attributes have hierarchies, where related attributes can be grouped into major categories. Different system components need to emphasize different quality attributes, so not all possible attributes have to be accounted for. For example, not all applications prioritize accessibility with respect to those who may have impaired vision. Similarly, not all applications have emotional requirements to invoke a feeling in the user, as gambling and gaming platforms do.

A quality attribute is a measurable and testable property. If the property wasn’t testable, we wouldn’t be aware of quality losses, gains, or retention.

Different product categories will have different quality concerns. Ideally, one would maximize the value for all attributes. But reality teaches us that requirements themselves can be the source of conflict, and the same is true for quality attributes. So we must narrow our focus to only product-essential attributes.

Trade-offs

If we avoid technical debt with respect to quality attributes, we avoid trade-offs between conflicting non-functional product requirements. These requirements may come from different stakeholders, and they may be enforced simultaneously or in different sprints.

Luckily for us, there exist quality-attribute trade-off matrices and prescribed subsets of quality attributes for software products. These trade-offs were empirically derived, so while they may apply in general, some trade-offs might not hold in certain cases (e.g. more specific software product categories).

Example: Software Products

[Figure: a matrix comparing the quality being focused on (rows) to the side effects on the other quality attributes (columns)]

We interpret the above matrix as follows.

  • The attributes listed for rows are the quality being promoted / focused on by the practitioner.
  • The quality attributes listed for columns are the side effects realized by the practitioners when focusing on an attribute listed in a row entry.
  • A + implies a positive side effect.
  • A – implies a negative side effect.
  • All matrix diagonal entries can be assumed to have full effect.

Priorities

Quality requirements need to be measurable. The requirement is an agreement between stakeholders on expectations. Achievement of that agreement is determined, demonstrated, and justified with a quantifiable goal. Stakeholders will have their own individual quality preferences, which is a potential source of conflict.

Certain quality attribute combination trade-offs will be unavoidable when satisfying a product’s requirements. For these scenarios, a decision needs to be made on which quality is more important to the project—attributes need to be prioritized.

Quality attribute prioritization helps us deal with these issues in two ways. By prioritizing attributes, requirements elicitation becomes aligned with project success. Additionally, priorities provide immediate and clear responses for managing quality conflicts.

A balance can be struck by:

  1. prioritizing quality attributes essential to the product
  2. using a trade-off matrix to avoid committing to conflicting goals (by keeping the lower-priority requirement on the backlog)
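As a sketch, step 2 amounts to a simple filter: walk the attributes in priority order and push the lower-priority side of any conflict onto the backlog. The attribute names and conflict pairs below are hypothetical, not taken from a real trade-off matrix.

```python
# Hypothetical attribute priorities (highest first) and pairwise conflicts.
priorities = ["Security", "Performance", "Usability", "Portability"]
conflicts = {frozenset({"Security", "Performance"}),
             frozenset({"Performance", "Portability"})}

committed, backlog = [], []
for attr in priorities:  # walk in priority order
    if any(frozenset({attr, done}) in conflicts for done in committed):
        backlog.append(attr)   # lower-priority side of a conflict stays on the backlog
    else:
        committed.append(attr)

print(committed, backlog)  # ['Security', 'Usability', 'Portability'] ['Performance']
```

Here Performance loses to the higher-priority Security and waits on the backlog, while Portability is admitted because its only conflict (with Performance) is no longer in the sprint.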

Technical Debt

Recall that technical debt is the cost of reworking a solution. Technical staff are the main stakeholders in resolving technical debt. In general, technical debt stems from trade-offs made for short-term gains. This is because Agile methods are change-driven, not plan-driven.

In Agile, it is impossible to predict future ticket priorities or know when tickets will be addressed. The norm is for priorities to be changed and tickets/stories to be rewritten. As such, long-term goals for solving technical debt are a fool’s errand.

As I mentioned earlier, measurable quality requirements enable stakeholder agreement on expectations. There is no point in specifying requirements with a fuzzy goal, as expectations become vague or ambiguous. This is compounded by a lack of quantification.

Fuzzy goals and difficult measurability are the main pain points for addressing technical debt. With some exceptions, most quality concerns require dedicated sprints simply to be measurable (e.g. research and discovery, developing benchmarks, or an entirely separate environment or infrastructure).

While we may apply the quality attribute trade-off matrix to technical debt, prioritizing technical debt is a bizarre proposition. The debt could be prioritized consistently with the quality concerns of normal project tickets, but this would increase the technical debt, as it would amplify the trade-offs. Alternatively, the inverse prioritization could be used, but this wouldn’t negate the technical debt; it would result in a different technical debt, as the trade-off matrix is not symmetric.

Moreover, what does zero technical debt in a change-driven project even mean? Additional quality concerns can always emerge or be discovered later in the project. Zero technical debt tickets for today’s code base may actually be ten next month on the same subset of code due to learnings and better developer context.

How can we prioritize our technical debt without adding more to it? We know we can use a trade-off matrix to avoid conflicting debt, but this is only applicable to pairwise comparisons.

What if we have four quality concerns that all uniquely conflict with two others? We don’t know which subset we should allow into the sprint. Arbitrarily addressing conflict-free quality concerns may not add to today’s technical debt, but it can cause problems two to three sprints later. The trade-off matrix alone only lets us locally avoid the current sprint’s debt.
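To make the four-concern scenario concrete, suppose hypothetical attributes A, B, C, and D each conflict with exactly two of the others. Enumerating the conflict-free subsets by brute force shows the pairwise matrix leaves us with a tie:

```python
from itertools import combinations

attrs = ["A", "B", "C", "D"]
# Hypothetical conflicts: each attribute conflicts with exactly two others.
conflicts = {("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")}

def conflict_free(subset):
    """True if no pair within the subset appears in the conflict set."""
    return all(tuple(sorted(pair)) not in conflicts for pair in combinations(subset, 2))

# Find the largest conflict-free subsets.
best = []
for size in range(len(attrs), 0, -1):
    best = [sorted(c) for c in combinations(attrs, size) if conflict_free(c)]
    if best:
        break

print(best)  # [['A', 'C'], ['B', 'D']]
```

Either subset avoids conflict in the current sprint; the matrix alone gives no reason to prefer one over the other.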


Changing Our Frame of Reference

We want to avoid trade-offs where possible, or at least minimize them. We’re not going to rely on prioritizing our quality attributes, since priorities can change from sprint to sprint via backlog management. We aren’t focusing on technical debt accrued from requirement gaps, which changing priorities can cause. Instead, we’re going to assume all quality attributes are equally important and must be pursued.

We minimize trade-offs by re-orienting how we view our side effects. If we had a numeric form of our trade-off matrix, then we could use Principal Component Analysis (PCA) to determine which mixture of side effects has the most impact. PCA effectively returns a new set of axes (a.k.a. basis vectors) for viewing the same data. These axes are special in that each basis vector is orthogonal to every other.

PCA returns axes that minimize the covariance between quality-attribute side effects. Our interpretation of the individual principal components is lost: we would no longer know which single quality we were promoting or focusing on, only the magnitude of its impact. But the interpretation of component entries as side effects remains intact. Thus, we can simply shift our focus to the most impactful side effects per component.

To summarize:

  • each principal component minimizes conflicts with other principal components
  • scalar entries within a principal component provide the mixture of side effects to align against and bundle together
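As a quick sanity check of the orthogonality claim, we can fit scikit-learn’s PCA on random stand-in data (not the trade-off matrix, which we build later) and confirm the components form an orthonormal basis:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
data = rng.normal(size=(16, 16))  # random stand-in for a numeric trade-off matrix

pca = PCA(n_components=16, svd_solver="full")
pca.fit(data)

# Rows of components_ are the new basis vectors; their Gram matrix is the
# identity, i.e. every component is orthogonal to every other and unit length.
gram = pca.components_ @ pca.components_.T
print(np.allclose(gram, np.eye(16)))  # True
```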

Reinterpreting the Quality-Attribute Trade-offs

We need to reinterpret our trade-off matrix into a numeric form in order to apply PCA. The rows and columns are interpreted as before. However, the matrix’s diagonal, empty entries, + entries, and – entries need numerical forms.

We’re going to assume side effects to be percentages. Diagonal entries can be expressed as 100% (or 1.0) effective in their intention. We assume empty entries to be 0% (or 0.0).

The numeric values for + and – don’t have to be equivalent in magnitude. We’ll pick two possible absolute values: 1% (or 0.01) and 5% (or 0.05), echoing the significance levels conventionally used in statistical testing.

We are now ready to apply PCA.

Calculating Principal Components

We aren’t going to focus on how to calculate PCA. We’re simply going to use it from a software package and use its results.

For those interested, accompanying Python code is provided for the calculation.

Setup

First, we do some setup to simplify our effort.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
entryLabels = ["Availability",
               "Efficiency",
               "Installability",
               "Integrity",
               "Interoperability",
               "Modifiability",
               "Performance",
               "Portability",
               "Reliability",
               "Reusability",
               "Robustness",
               "Safety",
               "Scalability",
               "Security",
               "Useability",
               "Verifiability"]

identityMatrix = np.identity(16)

positiveMatrix = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0],#Availability
                           [1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0],#Efficiency
                           [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0],#Installability
                           [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0],#Integrity
                           [1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0],#Interoperability
                           [1, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 1],#Modifiability
                           [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],#Performance
                           [0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1],#Portability
                           [1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 1],#Reliability
                           [0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1],#Reuseability
                           [1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 0],#Robustness
                           [0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0],#Safety
                           [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0],#Scalability
                           [1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0],#Security
                           [0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 0],#Useability
                           [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0] #Verifiability
                          ])

negativeMatrix = np.array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],#Availability
                           [0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0],#Efficiency
                           [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],#Installability
                           [0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 1],#Integrity
                           [0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0],#Interoperability
                           [0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],#Modifiability
                           [0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0],#Performance
                           [0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 0],#Portability
                           [0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],#Reliability
                           [0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0],#Reuseability
                           [0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],#Robustness
                           [0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1],#Safety
                           [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],#Scalability
                           [0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1],#Security
                           [0, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 1],#Useability
                           [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] #Verifiability
                          ])

Generate Trade-Off Matrices

Next, we generate the different numerical interpretations of our trade-off matrix. We have four cases to consider, due to our numerical interpretation of + and –.

  • Using 0.01 and -0.01 for side effects. We’ll refer to this as our low-stakes scenario.
  • Using 0.05 and -0.05 for side effects. We’ll refer to this as our high-stakes scenario.
  • Using 0.01 and -0.05 for side effects. We’ll refer to this as our unfavourable-stakes scenario.
  • Using 0.05 and -0.01 for side effects. We’ll refer to this as our favourable-stakes scenario.
# High Stakes: {+, -} = {0.05, -0.05}
highStakesTradeOffMatrix = identityMatrix + (0.05 * positiveMatrix) + (-0.05 * negativeMatrix)

# Low Stakes: {+, -} = {0.01, -0.01}
lowStakesTradeOffMatrix = identityMatrix + (0.01 * positiveMatrix) + (-0.01 * negativeMatrix)

# Favourable Stakes: {+, -} = {0.05, -0.01}
favourableStakesTradeOffMatrix = identityMatrix + (0.05 * positiveMatrix) + (-0.01 * negativeMatrix)

# Unfavourable Stakes: {+, -} = {0.01, -0.05}
unfavourableStakesTradeOffMatrix = identityMatrix + (0.01 * positiveMatrix) + (-0.05 * negativeMatrix)

Applying PCA

pca1 = PCA(n_components=len(entryLabels), svd_solver='full')
pca2 = PCA(n_components=len(entryLabels), svd_solver='full')
pca3 = PCA(n_components=len(entryLabels), svd_solver='full')
pca4 = PCA(n_components=len(entryLabels), svd_solver='full')

pca1.fit(highStakesTradeOffMatrix)
pca2.fit(lowStakesTradeOffMatrix)
pca3.fit(favourableStakesTradeOffMatrix)
pca4.fit(unfavourableStakesTradeOffMatrix)

PCA(copy=True, iterated_power='auto', n_components=16, random_state=None,
    svd_solver='full', tol=0.0, whiten=False)

Results

Recall that PCA minimizes the covariance between components. This means the variance within a component is more reliable for explaining the data. As a result, components with larger magnitudes explain more variance and are better at summarizing the input data.

Analyzing Component Variance

We can approximate our trade-off matrix by choosing components responsible for the most variance. The main use for PCA is to generalize the treatment of data by allowing smaller components to be ignored. This allows the input to be reduced in its dimensionality. We aren’t going to use this property, but it’s good to know in case we want to leverage it in a sprint (i.e. narrow focus onto fewer concerns).

tradeOffMatrixPCAVarianceRatio = pd.DataFrame({"highStakes" : pca1.explained_variance_ratio_,
                                               "lowStakes" : pca2.explained_variance_ratio_,
                                               "favourableStakes" : pca3.explained_variance_ratio_,
                                               "unfavourableStakes" : pca4.explained_variance_ratio_
                                               }, index=None)
tradeOffMatrixPCAVarianceRatio.plot(kind="line", figsize=(8,6), rot=90, title="Principal Component Variance Ratios")

Key points to draw from principal component variance.

  • It’s easier to approximate all quality attributes with PCA when the stakes are high and side effects are more prominent.
  • We can approximate them by focusing on the principal components responsible for more attribute impact. The Pareto Principle could be used to select a subset of the components to focus on.
  • It’s easier to minimize quality-attribute conflicts with PCA.
  • The principal components are orthogonal to one another, meaning they are mutually uncorrelated.
  • The input can effectively be reduced from 16 quality attributes to 15 components, as the smallest component explains almost no variance (though PCA still returns 16 components).
  • Principal components that vary the least have the least impact and can thus be discarded with little to no consequence.
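To make the Pareto-style selection concrete, here is a sketch that fits PCA on a hypothetical stand-in matrix (identity diagonal plus random ±0.05 side effects, not the article’s actual trade-off matrix) and keeps the smallest leading set of components covering 80% of the variance:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# Stand-in with the same shape as our encoding: ones on the diagonal plus
# random +/-0.05 side effects (hypothetical, not the article's matrix).
matrix = np.eye(16) + 0.05 * rng.choice([-1.0, 0.0, 1.0], size=(16, 16))

pca = PCA(n_components=16, svd_solver="full")
pca.fit(matrix)

# Pareto-style cut: the smallest leading set of components whose
# cumulative explained variance reaches 80%.
cumulative = np.cumsum(pca.explained_variance_ratio_)
keep = int(np.searchsorted(cumulative, 0.80)) + 1
print(f"keep {keep} of 16 components ({cumulative[keep - 1]:.0%} of variance)")
```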

Inspecting High-Stake Components

Each principal component is the most conflict-free mixture of side effects. By inspecting the four largest principal components for the high-stakes scenario, we can observe that some quality attributes are overwhelmingly positive, while others are overwhelmingly negative.

tradeOffMatrixPCAPrincipalComponent = pd.DataFrame({"highStakes": pca1.components_[0],
                                                    "lowStakes": pca2.components_[0],
                                                    "favourableStakes": pca3.components_[0],
                                                    "unfavourableStakes": pca4.components_[0]
                                                    }, index=entryLabels)

tradeOffMatrixPCASecondaryComponent = pd.DataFrame({"highStakes": pca1.components_[1],
                                                    "lowStakes": pca2.components_[1],
                                                    "favourableStakes": pca3.components_[1],
                                                    "unfavourableStakes": pca4.components_[1]
                                                    }, index=entryLabels)

tradeOffMatrixPCATertiaryComponent = pd.DataFrame({"highStakes": pca1.components_[2],
                                                   "lowStakes": pca2.components_[2],
                                                   "favourableStakes": pca3.components_[2],
                                                   "unfavourableStakes": pca4.components_[2]
                                                   }, index=entryLabels)

tradeOffMatrixPCAQuaternaryComponent = pd.DataFrame({"highStakes": pca1.components_[3],
                                                     "lowStakes": pca2.components_[3],
                                                     "favourableStakes": pca3.components_[3],
                                                     "unfavourableStakes": pca4.components_[3]
                                                     }, index=entryLabels)

tradeOffMatrixPCAPrincipalComponent.highStakes.sort_values(ascending=False).plot(kind="bar", rot=90)
tradeOffMatrixPCASecondaryComponent.highStakes.sort_values(ascending=False).plot(kind="bar", rot=90)
tradeOffMatrixPCATertiaryComponent.highStakes.sort_values(ascending=False).plot(kind="bar", rot=90)
tradeOffMatrixPCAQuaternaryComponent.highStakes.sort_values(ascending=False).plot(kind="bar", rot=90)

When choosing quality attributes to focus on via a component, the idea is to focus on those qualities that are positive in their side effect, while sacrificing those that are already negative.

Sacrificing quality could mean one of two things. It could mean not paying attention to the quality attribute during the sprint. It could also mean allowing technical debt to be sunk into a preferred quality attribute of the software. Sacrificing quality does not mean permission to do harm to the product with respect to that particular quality attribute.

Bundling Quality Attributes

We’re now finally ready to calculate our software quality attribute bundles. We assumed all software quality attributes would be pursued, we assumed numeric forms for the trade-off matrix, and we applied PCA to minimize conflicts between concerns. Now we form quality attribute bundles from these components by applying the Pareto Principle and focusing only on the most impactful side effects.

We take the top-2 positive entries and top-2 negative entries of each component to form a quality attribute bundle. We repeat this for all scenarios and present the finished tables of these bundles below.

Note: Principal Component 1 is the largest, while Principal Component 16 is the smallest. Additionally, some components have either exclusively negative or exclusively positive entries. Thus the smallest component with the least variance/magnitude/impact may be strictly negative or strictly positive.
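The top-2 selection can be sketched as follows, again on a hypothetical stand-in matrix with generic labels Q0 to Q15 in place of the real attribute names:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

labels = [f"Q{i}" for i in range(16)]  # generic stand-ins for the attribute names

rng = np.random.default_rng(2)
# Hypothetical matrix in the same shape as our encoding.
matrix = np.eye(16) + 0.05 * rng.choice([-1.0, 0.0, 1.0], size=(16, 16))
pca = PCA(n_components=16, svd_solver="full").fit(matrix)

def bundle(component, labels, k=2):
    """Top-k positive entries become the focuses; the k most negative become the sacrifices."""
    s = pd.Series(component, index=labels).sort_values(ascending=False)
    focus = [name for name, v in s.head(k).items() if v > 0]
    sacrifice = [name for name, v in s.tail(k).items() if v < 0]
    return focus, sacrifice

for i, comp in enumerate(pca.components_[:4], start=1):
    focus, sacrifice = bundle(comp, labels)
    print(f"component {i}: focus={focus} sacrifice={sacrifice}")
```

Note the sign filters: a component with exclusively positive (or exclusively negative) entries yields a bundle with no sacrifices (or no focuses), matching the note above.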

High Stakes Quality Attribute Bundles

| Principal Component | Primary Focus | Secondary Focus | Secondary Sacrifice | Primary Sacrifice |
|---|---|---|---|---|
| 1 | Efficiency | Performance | Robustness | Interoperability |
| 2 | Reusability | Portability | Integrity | Security |
| 3 | Useability | Installability | Integrity | Portability |
| 4 | Modifiability | Verifiability | Installability | Interoperability |
| 5 | Scalability | Availability | Safety | Installability |
| 6 | Scalability | Useability | Interoperability | Security |
| 7 | Safety | Useability | Verifiability | Installability |
| 8 | Performance | Reliability | Reusability | Installability |
| 9 | Robustness | Verifiability | Portability | Reliability |
| 10 | Availability | Safety | Security | Verifiability |
| 11 | Efficiency | Verifiability | Reliability | Performance |
| 12 | Safety | Reliability | Reusability | Integrity |
| 13 | Modifiability | Installability | Reliability | Reusability |
| 14 | Security | Useability | Reliability | Robustness |
| 15 | Robustness | Portability | Interoperability | Scalability |
| 16 | Integrity | Performance | | |

Low Stakes Quality Attribute Bundles

| Principal Component | Primary Focus | Secondary Focus | Secondary Sacrifice | Primary Sacrifice |
|---|---|---|---|---|
| 1 | Efficiency | Performance | Robustness | Interoperability |
| 2 | Reusability | Portability | Integrity | Security |
| 3 | Useability | Installability | Integrity | Portability |
| 4 | Modifiability | Verifiability | Installability | Interoperability |
| 5 | Scalability | Availability | Safety | Installability |
| 6 | Useability | Scalability | Availability | Security |
| 7 | Safety | Interoperability | Portability | Installability |
| 8 | Performance | Reliability | Reusability | Installability |
| 9 | Robustness | Availability | Interoperability | Reliability |
| 10 | Availability | Safety | Robustness | Verifiability |
| 11 | Efficiency | Verifiability | Reliability | Performance |
| 12 | Safety | Reliability | Reusability | Integrity |
| 13 | Modifiability | Interoperability | Reusability | Reliability |
| 14 | Security | Useability | Reliability | Robustness |
| 15 | Portability | Robustness | Interoperability | Scalability |
| 16 | Performance | Integrity | | |

Favourable Stakes Quality Attribute Bundles

| Principal Component | Primary Focus | Secondary Focus | Secondary Sacrifice | Primary Sacrifice |
|---|---|---|---|---|
| 1 | Efficiency | Performance | Robustness | Security |
| 2 | Efficiency | Performance | Verifiability | Reusability |
| 3 | Interoperability | Portability | Reliability | Modifiability |
| 4 | Integrity | Modifiability | Robustness | Interoperability |
| 5 | Installability | Useability | Scalability | Availability |
| 6 | Useability | Scalability | Installability | Security |
| 7 | Installability | Scalability | Interoperability | Performance |
| 8 | Reliability | Portability | Safety | Reusability |
| 9 | Verifiability | Availability | Installability | Modifiability |
| 10 | Performance | Robustness | Useability | Efficiency |
| 11 | Security | Efficiency | Integrity | Availability |
| 12 | Safety | Verifiability | Integrity | Reusability |
| 13 | Interoperability | Integrity | Security | Portability |
| 14 | Reliability | Reusability | Security | Modifiability |
| 15 | Robustness | Efficiency | Interoperability | Scalability |
| 16 | Integrity | Performance | | |

Unfavourable Stakes Quality Attribute Bundles

| Principal Component | Primary Focus | Secondary Focus | Secondary Sacrifice | Primary Sacrifice |
|---|---|---|---|---|
| 1 | Performance | Efficiency | Robustness | Interoperability |
| 2 | Integrity | Security | Reusability | Verifiability |
| 3 | Portability | Integrity | Security | Useability |
| 4 | Installability | Safety | Efficiency | Modifiability |
| 5 | Reusability | Modifiability | Interoperability | Scalability |
| 6 | Interoperability | Security | Integrity | Verifiability |
| 7 | Scalability | Useability | Verifiability | Security |
| 8 | Availability | Performance | Installability | Verifiability |
| 9 | Reliability | Interoperability | Availability | Robustness |
| 10 | Availability | Installability | Scalability | Performance |
| 11 | Scalability | Availability | Efficiency | Interoperability |
| 12 | Reusability | Integrity | Scalability | Safety |
| 13 | Modifiability | Performance | Safety | Efficiency |
| 14 | Robustness | Reliability | Verifiability | Security |
| 15 | Portability | Efficiency | Interoperability | Integrity |
| 16 | Performance | Integrity | | |

Bundle Usage

We have calculated our quality attribute bundles for all scenarios. All that’s left is to apply them. For our purposes, we want to use these bundles to avoid technical debt. This is achieved by attempting to focus on all quality attributes, reframing the side effects using principal components, and approximating those components with the Pareto Principle.

The process being used doesn’t matter (e.g. Waterfall, Scrum, Kanban, Agile). Practitioners still create tickets for handling technical debt via refactoring. TDD is still used.

The only change lies in how tickets for technical debt are picked up and addressed. Ideally, you’d pick up technical debt tickets based on a bundle’s primary and secondary focus. When refactoring and during code reviews, practitioners aim to ignore or deprioritize the sacrificed attributes for the sake of the refactor.

Practitioners should start with the most impactful bundles first (principal component 1) and work their way down to the less impactful bundles (principal component 16). Note that the side-effect entries of the principal components tend to fall between 0.5 and -0.5, while we defined 1 as realizing the full effect of a quality attribute (the diagonal of our trade-off matrix’s numerical form). Thus, practitioners first pay forward a surplus in quality across many quality attributes, which builds goodwill. This goodwill on the majority of attributes is then used up addressing the less impactful bundles and the minority of attributes that have yet to be addressed.

Once the last bundle is addressed, practitioners can simply start the cycle anew, or they can assess if the stakes have changed and switch to a more appropriate scenario’s bundles.

Conclusion

We’ve demonstrated a method to minimize conflicts in quality and avoid technical debt stemming from quality trade-offs. Although many assumptions have been made, the process is more important than the result.

Learnings: Taking a Step Back

A theme underlying this article has been the use of mental models. The mental models employed were the following:

  • Pareto Principle
  • Thought Experiment
  • Probabilistic thinking
  • Relativity
  • Leverage
  • Distributions
  • Algebraic Equivalence

This article is the thought experiment. We leveraged empirically known quality-attribute trade-offs. By utilizing PCA, we applied relativity to our trade-offs, viewing the same data from a new frame of reference. Similarly, PCA is commonly used to better fit multivariate data distributions, hence probabilistic thinking and distributions. We also showed an algebraic equivalence between our trade-offs and the PCA interpretation of those same trade-offs. Finally, we approximated bundles of concerns with the Pareto Principle.


Acknowledgements

This article defined its problem space with respect to quality attributes. This was based on chapter 14 of the book Software Requirements by Karl Wiegers and Joy Beatty. Additionally, a related article by Karl Wiegers was also referenced.

Additionally, acknowledgement must be given to the following collaborators, reviewers, and editors, without whom this article would not have been possible.

  • Rafael Santos
  • Jocelyn Cheung
  • Mann Hing Khor
  • Dominic Smith
  • Jason Martin
  • Katie Luke
