Bibliography on Non-functional Properties and Requirements

(last updated: 2010-06-24)

This bibliography lists a number of works from the literature dealing with non-functional properties and requirements. As the literature in this field is quite large, the bibliography below is obviously by no means complete. I will, however, update it occasionally. So, if you feel a particular book or paper should be mentioned, please feel free to send me an email.

The research in the field can be classified into two major categories: work concerning a) non-functional requirements, and b) non-functional properties of actual software artefacts. While we are mainly interested in the latter area of non-functional properties, we will review some work from the former field, because the insights gained there are also applicable to the theory of non-functional properties. The area of non-functional properties can be further divided into three subareas:

  1. Basic Contract Concepts: The work in this area is concerned with general observations of what is required to specify non-functional properties in a contractual manner. The authors do not make any choices about concrete specification languages or styles, but rather attempt to explain how the concept of design by contract (and variants thereof) can be extended to non-functional properties. Most of the work specifically addresses component-based software.
  2. Characteristic-Specific Approaches: These approaches introduce new, or extend existing, formal description techniques to deal with specific non-functional properties or classes of such properties.
  3. Measurement-Based Approaches: Work in this area makes non-functional measurements (often called characteristics) first-class citizens of specifications, and thus allows any kind of non-functional property to be expressed as long as the underlying measurement can be formalised in the language. This is the approach with the greatest flexibility. At the same time, the high degree of generality makes it harder to make such specifications usable in property-specific analysis techniques.

The above classification focuses mainly on the specification of non-functional properties. The goal of such specifications is, typically, the prediction of properties of systems. Another class of approaches towards such prediction is based on profiling. At the end of this bibliography, we will briefly look at work in this area.

General

The work of Hissam et al. at the Software Engineering Institute at Carnegie Mellon University does not quite fit into this classification. The authors describe Prediction-Enabled Component Technology (PECT), which is a generic concept for combining a component technology with one or more analysis models for non-functional properties. The name PECT stands both for the generic concept and for an individual instantiation of the concept with a concrete component technology (e.g., EJB) and a concrete analysis model (e.g., Software Performance Engineering (SPE)). The authors do not strive for a formal description of the general notions. Instead, they focus on explaining the application of specific analysis techniques to specific component models using their framework. Also, whether their approach is measurement-based or characteristic-specific seems to depend largely on the concrete analysis technique used.

Scott A. Hissam, Gabriel A. Moreno, Judith A. Stafford, and Kurt C. Wallnau. Packaging predictable assembly. In J. Bishop, editor, Proc. IFIP/ACM Working Conf. on Component Deployment (CD 2002), volume 2370 of LNCS, pages 108–126, Berlin, Germany, June 2002. Springer-Verlag.
[HTTP] [BibTeX]

Non-functional Requirements

Chung et al. present a framework for reasoning about design decisions which lead from non-functional requirements to the actual design and eventually the implementation. They use the notion of a softgoal to represent non-functional requirements, which may be imprecise, or subjective. Softgoals are related to each other as well as to operationalisations (representing possible realisations for a softgoal) to drive the software development process. Rationale for design decisions is explicitly recorded in the form of claims.

Lawrence Chung, Brian A. Nixon, Eric Yu, and John Mylopoulos. Non-Functional Requirements in Software Engineering. The Kluwer international series in software engineering. Kluwer Academic Publishers Group, Dordrecht, Netherlands, 1999.
[HTTP] [BibTeX]

Classifications of Non-functional Requirements

Various authors have given classifications of non-functional requirements, which can equally well be applied to non-functional properties. We will start by discussing the classification by Sommerville, which to us is the most comprehensive one, and then discuss further classifications, focussing on their specific differences.


Sommerville identifies three main classes of non-functional requirements:

  1. Product Requirements: These are requirements directly concerning the software system to be built. They include requirements relevant to the customer---such as usability, efficiency, and reliability requirements---but also portability requirements which are more relevant to the organisation developing the software.
  2. Process Requirements: Sometimes also called organisational requirements, these requirements "[...] are a consequence of organisational policies and procedures." They include requirements concerning programming language, design methodology, and similar requirements defined by the developing organisation.
  3. External Requirements: These requirements come neither from the customer nor from the organisation developing the software. They include, for example, requirements derived from legislation relevant to the field for which the software is being produced.

It is clear that any classification of non-functional requirements that is not based on how these requirements can be elicited can also be used as a classification of non-functional properties. Thus, simply replacing "requirements" by "properties" in the classification above gives a classification of non-functional properties.

Ian Sommerville. Software Engineering (6th edition). Addison-Wesley, 2001. Page 101 ff.
[HTTP] [BibTeX]

Sommerville's classification can be considered incomplete. In particular, some product requirements---for example data quality requirements (accuracy of results, etc.)---cannot easily be included in this classification. For this reason, Bandelow extended the classification to support more classes of product properties.

Dirk Bandelow. Entwicklung einer CQML+-Basisbibliothek. Diplomarbeit, Technische Universität Dresden, February 2004. In German.
[BibTeX]

Malan and Bredemeyer give an introduction to non-functional requirements, which they classify into constraints and qualities. Constraints are sub-classified (for example, into context constraints), while qualities are distinguished into run-time and development-time qualities.

Ruth Malan and Dana Bredemeyer. Defining non-functional requirements. Bredemeyer Consulting, White Paper. http://www.bredemeyer.com/papers.htm, 2001.
[HTTP] [BibTeX]

The ISO Quality of Service Framework provides a simple classification of non-functional requirements, in particular of those in the area of Quality of Service (mainly product properties in Sommerville's classification).

Information technology – quality of service: Framework. ISO/IEC 13236:1998, ITU-T X.641, 1998.
[BibTeX]

Basic Contract Concepts

Beugnard et al. propose to distinguish four levels of contracts, each level depending on all the lower levels:

  1. Syntactic Level: On this level are contracts describing syntactic interface structures of components. Essentially this covers anything that can be expressed in plain Interface Definition Language (IDL)---that is, interfaces and operation signatures for both used and provided interfaces.
  2. Behavioural Level: On this level the behaviour of the component is specified. Formal techniques usually employed on this level include pre- and post-conditions, invariants, temporal-logic specifications, and so on.
  3. Synchronisation Level: On this level, one can additionally specify contracts about synchronisation properties such as re-entrancy, mutually exclusive access, call protocols, the order of events emitted by the component, and so on.
  4. Quality of Service (QoS) Level: This is the level where contracts on non-functional properties of components reside.

The higher the level of a contract, the more flexibly it can be negotiated. It is obvious that syntactic contracts are as good as cast in stone once an interface has been defined: they form the basis for communication between the components, so negotiating about them is all but impossible. On the other hand, it is not unreasonable to expect components to be able to provide their services in a range of qualities, so that clients can select between them and potentially even perform actual negotiations, with bids and counter bids being exchanged between component and client.

Antoine Beugnard, Jean-Marc Jézéquel, Noël Plouzeau, and Damien Watkins. Making components contract aware. IEEE Computer, 32(7):38–45, July 1999.
[HTTP] [BibTeX]

Röttger and Aigner, as well as Selic, point out the importance of specifying the required resources in contracts, in particular for real-time properties. Both enhance the structure of component contracts---which hitherto essentially described provided and used properties, similar to the rely–guarantee specifications described above---with an explicit description of the resources the component requires from its environment. This leads to a layered system in which components on one layer are connected through their used and provided properties, and the layers are connected by resource associations between components on different layers.

Simone Röttger and Ronald Aigner. Modeling of non-functional contracts in component-based systems using a layered architecture. In Component Based Software Engineering and Modeling Non-functional Aspects (SIVOES-MONA), Workshop at UML 2002, October 2002.
[BibTeX]
Bran Selic. A generic framework for modeling resources with UML. IEEE Computer, 33(6):64–69, June 2000.
[HTTP] [BibTeX]

Reussner proposed the concept of parametrised contracts, a more formal representation of dependencies between interfaces provided or required by a component. Parametrised contracts capture the dependencies inside a component, as opposed to dependencies between components, which are expressed in more conventional contracts. The concept of parametrised contracts was originally developed for functional specifications, but Reussner et al. have extended this work to also include non-functional properties. Specifically, they have shown how reliability analysis can be supported by the specification of parametrised contracts using Markov-chain models. Parametrised contracts explicitly acknowledge the intra-component dependencies between provided and required properties. Moreover, Reussner et al. show for the specific case of reliability that it is important to distinguish between properties inherent to a component implementation and properties which emerge from using the component.
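The underlying idea of reliability prediction from a usage-profile Markov chain can be sketched as follows. This is an illustrative toy model, not Reussner et al.'s actual formalism; all state names and probabilities are invented.

```python
# Illustrative sketch (not Reussner et al.'s actual formalism): the
# reliability of a composed service is the probability of reaching a
# "done" state in a discrete-time Markov chain derived from the usage
# profile. Transition probabilities may sum to less than 1 per state;
# the remainder is the probability of immediate failure.

def reachability(transitions, start, goal, sweeps=10_000):
    """Probability of eventually reaching `goal` from `start`.

    `transitions` maps each transient state to a list of
    (successor, probability) pairs. Solved by fixed-point
    iteration of r(s) = sum over successors of p * r(s')."""
    r = {s: 0.0 for s in transitions}
    r[goal] = 1.0
    for _ in range(sweeps):
        for s, succs in transitions.items():
            r[s] = sum(p * r.get(t, 0.0) for t, p in succs)
        r[goal] = 1.0  # goal is absorbing
    return r[start]

# Two services in sequence; the second retries via the first with
# probability 0.05 and fails outright with probability 0.05.
chain = {
    "s1": [("s2", 0.9)],                  # 10% failure in service 1
    "s2": [("done", 0.9), ("s1", 0.05)],  # 5% failure, 5% retry
}
reliability = reachability(chain, "s1", "done")  # analytically 0.81/0.955
```

The fixed-point iteration converges because the retry loop has probability below 1; for larger chains one would solve the corresponding linear equation system directly.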

Ralf H. Reussner. Parametrisierte Verträge zur Protokolladaption bei Software-Komponenten. Logos Verlag, Berlin, 2001. In German.
[BibTeX]
Ralf H. Reussner, Iman H. Poernomo, and Heinz W. Schmidt. Contracts and quality attributes for software components. In Wolfgang Weck, Jan Bosch, and Clemens Szyperski, editors, Proc. 8th Int’l Workshop on Component-Oriented Programming (WCOP’03), June 2003.
[BibTeX]
Ralf H. Reussner, Iman H. Poernomo, and Heinz W. Schmidt. Reasoning about software architectures with contractually specified components. In A. Cechich, M. Piattini, and A. Vallecillo, editors, Component-Based Software Quality: Methods and Techniques, volume 2693 of LNCS, pages 287–325. Springer, 2003.
[BibTeX]

Chimaris and Papadopoulos present a quite different notion of contracts. In their view, a contract consists of both a specification of a non-functional property and an aspect (in the sense of Aspect-Oriented Programming (AOP)) that can be used to guarantee that property at runtime.

Avraam Chimaris and George A. Papadopoulos. Implementing QoS aware component-based applications. In Robert Meersman and Zahir Tari, editors, On the Move to Meaningful Internet Systems 2004: CoopIS, DOA, and ODBASE: OTM Confederated Int'l Confs., volume 3291 of LNCS, pages 1173–1189, Agia Napa, Cyprus, October 2004. Springer.
[HTTP] [BibTeX]

Characteristic-Specific Approaches

Demairy et al. define a kind of ADL which can be used to model multimedia applications. Such applications consist of multimedia components which exchange streams of what the authors call 'data frames' via connectors. Components and connectors handle such streams following certain protocols (e.g., PCM, MPEG-2, TCP/IP). Apart from defining a data format (which can be considered a functional property), protocols also define timing behaviour---for example, how long it takes to handle a data frame using this protocol. The paper is concerned with two properties:

  1. Protocol Consistency: This is essentially a functional correctness property which holds if components and connectors are connected in such a way that each component understands the protocol delivered by the adjoining connector and vice versa. Protocols are modelled by an enumeration of protocol identifiers, sets of which are associated with components and connectors. The authors define an ordering relationship over these protocol identifiers which represents compatibility relationships between protocols. This allows them to check any model for protocol consistency.
  2. Timeliness Consistency: The paper covers probabilistic constraints over three characteristics: a) the time between two successive inputs of a data frame, b) synchronisation between streams received or sent on different ports, and c) the end-to-end delay between two ports. The authors define a set of rules which allow them to derive properties of composed systems from properties of their components, if the composition is constructed from so-called "serial" and "parallel" composition only.

The representation of protocols is very simple: A protocol is represented by a unique name, and knowledge of the meaning of timeliness properties is very much implicit. The approach is, therefore, good for its very narrow purposes, but it seems difficult to extend it to even slightly different properties.
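The protocol-consistency check reduces to looking up each component/connector adjacency in the compatibility relation. The sketch below illustrates this; the protocol identifiers and the compatibility pairs are invented, not taken from the paper.

```python
# Sketch of Demairy et al.'s protocol-consistency idea (protocol names
# and compatibility pairs are illustrative): protocols are plain
# identifiers, and an explicit relation records which delivered protocol
# a receiving port can understand.

# (delivered, expected) in COMPATIBLE means a port expecting `expected`
# can handle a stream delivered in protocol `delivered`.
COMPATIBLE = {
    ("MPEG-2", "MPEG-2"),
    ("PCM", "PCM"),
    ("PCM", "raw-audio"),  # assumed: raw-audio consumers accept PCM
}

def consistent(adjacencies):
    """adjacencies: list of (delivered_protocol, expected_protocol)
    pairs, one per component/connector attachment point. The model is
    protocol-consistent if every adjacency is compatible."""
    return all(pair in COMPATIBLE for pair in adjacencies)

ok = consistent([("PCM", "raw-audio"), ("MPEG-2", "MPEG-2")])
bad = consistent([("MPEG-2", "PCM")])  # video protocol into an audio port
```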

Erwan Demairy, Emmanuelle Anceaume, and Valérie Issarny. On the correctness of multimedia applications. In Proc. 11th Euromicro Conf. on Real-Time Systems (ECRTS'99), York, UK, June 1999. IEEE.
[BibTeX]

Issarny and Bidan present Aster, a system in support of configurable software development. Applications are constructed from components which communicate via buses. Aster enables compatibility checks based on properties required by components and provided by the bus and other system components. Individual properties are modelled as first-order predicates. However, the supported properties are restricted to: synchronisation, message-passing mode, and group size.

V. Issarny and C. Bidan. Aster: A framework for sound customization of distributed runtime systems. In Proc. 16th Int'l Conf. on Distributed Computing Systems (ICDCS '96), pages 586–593, May 1996. IEEE.
[HTTP] [BibTeX]

Koymans defines Metric Temporal Logic (MTL), which adds new operators: one expressing that a predicate holds in the current state or in a future state within a given time interval, and another stating that a predicate holds in all states within a specific time interval.

On this basis, Leue proposes Probabilistic MTL (PMTL), a probabilistic extension adding a new operator which asserts that a formula will eventually hold with probability p. From these basic operators he derives specification patterns for describing certain non-functional properties, such as response time, jitter, or stochastic reliability. Because the non-functional aspects are tightly integrated with the operators of the language, evaluating expressions in MTL or PMTL requires special algorithms; standard evaluation techniques for temporal-logic expressions cannot be reused directly.
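In one common notation (which varies between presentations of these logics), the operators just described can be rendered as follows:

```latex
% MTL: bounded "eventually" and "always"
\Diamond_{\le d}\,\varphi     % \varphi holds now or in some state within d time units
\Box_{[a,b]}\,\varphi         % \varphi holds in all states within the interval [a,b]

% PMTL adds a probabilistic "eventually":
\Diamond_{\ge p}\,\varphi     % with probability at least p, \varphi eventually holds

% Schematic response-time pattern in this style:
\Box\bigl(\mathit{request} \rightarrow \Diamond_{\le d}\,\mathit{response}\bigr)
```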

Stefan Leue. QoS specification based on SDL/MSC and temporal logic. In G. v. Bochmann, J. de Meer, and A. Vogel, editors, Workshop on Multimedia Applications and Quality of Service Verification, Montreal, 1994.
[HTTP] [BibTeX]
R. Koymans. Specifying real-time properties with metric temporal logic. Real-Time Systems, 2(4):255–299, 1990.
[HTTP] [BibTeX]

Based on an extensive study of the literature containing examples of use of formalisms such as PMTL, but also others, Grunske proposes a catalogue of specification patterns for probabilistic specifications. This catalogue associates each pattern both with a structured English form and with formal representations in CSL, the continuous stochastic logic of Aziz and others.

Lars Grunske. Specification patterns for probabilistic quality properties. In Proc. 30th International Conference on Software Engineering (ICSE '08), Leipzig, Germany, May 2008, pages 31–40. ACM, New York, NY.
[HTTP] [BibTeX]
Adnan Aziz, Kumud Sanwal, Vigyan Singhal, and Robert K. Brayton. Verifying continuous time Markov chains. In R. Alur and T. A. Henzinger, editors, Proc. 8th International Conference on Computer Aided Verification, CAV 96, volume 1102 of LNCS, pages 269–276. Springer, 1996.
[HTTP] [BibTeX]

Timed Automata are an extension of the classic theory of finite automata. They add a notion of dense time that can be sampled through so-called clocks. Clocks can be reset at any transition and store the amount of time that has passed since the last reset. Transitions can be guarded by constraints on clocks, the intuitive meaning being that the transition can only be taken when the associated clock constraints hold. Timed automata are good for modelling, and reasoning about, real-time systems where certain deadlines must be respected. However, the approach cannot be extended to other measurements, as the notion of time has been integrated directly into the semantic domain of the approach.
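The clock-and-guard mechanism can be illustrated with a minimal run checker. The automaton below (a request that must be answered within 2 time units) is an invented example, not taken from Alur and Dill's paper, and a real implementation would work on regions or zones rather than single runs.

```python
# Minimal sketch of the timed-automata idea: states, one clock,
# guards on transitions, and clock resets. A transition
# (src, action, guard, resets, dst) may be taken only when its guard
# over the current clock valuation holds; clocks in `resets` go to 0.

def run(transitions, state, timed_word):
    """Execute a timed word [(delay, action), ...]; return the final
    state, or None if at some point no enabled transition matches."""
    clocks = {"x": 0.0}
    for delay, action in timed_word:
        clocks = {c: v + delay for c, v in clocks.items()}  # time elapses
        for src, act, guard, resets, dst in transitions:
            if src == state and act == action and guard(clocks):
                for c in resets:
                    clocks[c] = 0.0
                state = dst
                break
        else:
            return None  # guard violated or no matching transition
    return state

# "A response must follow a request within 2 time units."
TA = [
    ("idle", "req",  lambda c: True,           ["x"], "wait"),
    ("wait", "resp", lambda c: c["x"] <= 2.0,  [],    "idle"),
]
on_time = run(TA, "idle", [(1.0, "req"), (1.5, "resp")])  # accepted
late    = run(TA, "idle", [(1.0, "req"), (3.0, "resp")])  # deadline missed
```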

Rajeev Alur and David L. Dill. A theory of timed automata. Theoretical Computer Science, 126(2):183–235, 1994.
[HTTP] [BibTeX]

In the context of ongoing research on combining CBSE and SPE, Grassi and Mirandola present a specification language tuned explicitly towards performance analysis of component-based systems. They use Unified Modelling Language (UML) models, which they annotate with stereotypes and tagged values according to the UML profile for schedulability, performance and time (SPT) to express timing properties of individual operations. An interesting property of the language is that it unifies the concepts of components and resources into one, so that components and resources can all be connected through appropriate connectors. Resource usage is explicitly modelled as calls to services offered by the resource (e.g., statements of the form "CPU: execute five operations"). The authors then use activity diagrams to model relevant system states and the control flow (simply called "Flow" in the paper). From this "Flow" they can extract information for performance analysis.

Based on this language, Bertolino and Mirandola develop Component-Based SPE (CB-SPE), a component-based extension to SPE. CB-SPE is a process that supports performance analysis of component-based systems. It is structured into two layers: at the component layer, component developers implement components and annotate their interfaces with performance indices according to SPT. At the application layer, system assemblers use these components to construct systems, derive queuing-network models, and analyse the system performance based on these models. We classify this approach as a characteristic-specific approach, because it is specifically tuned for performance evaluation. However, the approach has some features of a measurement-based approach. In particular, the performance indices are measurements themselves, modelled in the semantic framework provided by SPT.

Vincenzo Grassi and Raffaela Mirandola. Towards automatic compositional performance analysis of component-based systems. In Jozo Dujmovic, Virgilio Almeida, and Doug Lea, editors, Proc. 4th Int'l Workshop on Software and Performance (WOSP 2004), California, USA, January 2004, pages 59–63. ACM Press.
[HTTP] [BibTeX]
Antonia Bertolino and Raffaela Mirandola. Towards component based software performance engineering. In Proc. 6th Workshop on Component-Based Software Engineering: Automated Reasoning and Prediction at ICSE 2003, pages 1–6. ACM/IEEE, May 2003.
[BibTeX]
Antonia Bertolino and Raffaela Mirandola. Software performance engineering of component-based systems. In Jozo Dujmovic, Virgilio Almeida, and Doug Lea, editors, Proc. 4th Int'l Workshop on Software and Performance (WOSP 2004), California, USA, January 2004, pages 238–242. ACM Press.
[HTTP] [BibTeX]
Object Management Group. UML profile for schedulability, performance, and time specification. OMG Document, March 2002.
[HTTP] [BibTeX]

Menascé et al. present a framework for managing response time and throughput of components based on negotiations. Clients issue a session request, which a component can accept or reject. Upon accepting a request, a component may reply with a counter offer. In such a counter offer, components are only allowed to change the number of parallel requests they are prepared to handle. Decisions about accepting or rejecting a session request are made based on the results of a queuing-network analysis. The authors present a case study with Java components, with quite good results.
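The admission decision can be sketched as follows. The simple M/M/1 response-time formula and all numbers here are illustrative stand-ins for the paper's more detailed queuing-network analysis.

```python
# Hedged sketch of queuing-based admission control: accept a session
# request only if the predicted response time, including the session's
# extra load, still meets the negotiated bound. An M/M/1 queue is used
# here purely for illustration.

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue: R = 1 / (mu - lambda)."""
    if arrival_rate >= service_rate:
        return float("inf")  # queue saturates
    return 1.0 / (service_rate - arrival_rate)

def admit(current_load, session_load, service_rate, max_response_time):
    predicted = mm1_response_time(current_load + session_load, service_rate)
    return predicted <= max_response_time

# Server handles 100 req/s, current load 80 req/s, SLA of 0.1 s:
accept = admit(80.0, 5.0, 100.0, 0.1)    # predicted 1/15 s: within SLA
reject = admit(80.0, 15.0, 100.0, 0.1)   # predicted 1/5 s: SLA violated
```

A counter offer in this setting would correspond to recomputing `admit` with a reduced number of parallel requests until the bound holds.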

Daniel A. Menascé, Honglei Ruan, and Hassan Gomaa. A framework for QoS-aware software components. In Proc. 4th Int'l Workshop on Software and Performance (WOSP '04), Redwood Shores, California, January 2004, pages 186–196. ACM, New York, NY.
[HTTP] [BibTeX]

Fischer and de Meer present a probabilistic extension of Petri nets for modelling and optimising decision strategies in QoS management systems. Every transition in a Petri net is additionally associated with a probability indicating how likely it is to fire whenever it is enabled. On this basis, an extended form of Markov Reward Models is computed and used to select optimal strategies for the system.

S. Fischer and H. de Meer. QoS management: A model-based approach. In Proc. 6th IEEE Int'l Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS '98), page 205, 1998.
[HTTP] [BibTeX]

David Snowdon et al. at NICTA in Australia work on models of the energy needs of applications. Based on measurements of the energy consumption of CPU, memory, and bus IO, they have constructed mathematical models that make it possible to determine the overall energy consumption of an application as a function of voltage and clock-frequency settings. An interesting result of their work is that lowering clock frequencies below a certain value will actually increase the overall energy consumption of an application. The specific optimal frequency is application-specific. Furthermore, their work makes it possible to trade off energy consumption against performance.
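The existence of such an interior optimum can be reproduced with a toy model. The equations and constants below are purely illustrative, not Snowdon et al.'s fitted models: with voltage scaled proportionally to frequency, dynamic power grows roughly as f³ while run time shrinks as 1/f, and static power is paid for the whole run.

```python
# Toy energy model: E(f) = (P_static + c * f^3) * (cycles / f).
# The static term favours finishing fast; the dynamic term favours
# running slow; their sum has a minimum at an intermediate frequency.

def energy(f, cycles=1e9, p_static=0.5, c=1e-27):
    runtime = cycles / f          # seconds to execute the workload
    p_dynamic = c * f ** 3        # voltage scaled with frequency
    return (p_static + p_dynamic) * runtime

freqs = [n * 1e8 for n in range(2, 21)]  # 200 MHz .. 2 GHz
best = min(freqs, key=energy)
# With these constants the optimum lies strictly inside the range:
# running slower than `best` increases energy again.
```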

David C. Snowdon, Stefan M. Petters and Gernot Heiser. Power measurement as the basis for power management. Proceedings of the 1st Workshop on Operating System Platforms for Embedded Real-Time Applications, Palma, Mallorca, Spain, July, 2005
[HTTP]
David C. Snowdon, Sergio Ruocco and Gernot Heiser. Power Management and Dynamic Voltage Scaling: Myths and Facts. Proceedings of the 2005 Workshop on Power Aware Real-time Computing, New Jersey, USA, September, 2005
[HTTP]
David C. Snowdon, Stefan M. Petters and Gernot Heiser. Accurate On-line Prediction of Processor and Memory Energy Usage Under Voltage Scaling. Proceedings of the 7th International Conference on Embedded Software, Salzburg, Austria, October, 2007.
[HTTP]

Finally, we discuss research that can be seen as situated on the border between characteristic-specific and measurement-based approaches. Because error functions are a prominent element of this research, we will refer to it as the error-function–based approach. Staehli presented, in his dissertation, a technique for specifying the QoS properties of multimedia systems. Staehli distinguishes three viewpoints: content, view, and quality specifications. A content specification is a constraint on the possible data values at each point in a three-dimensional space. The three dimensions are x and y, the planar dimensions of images (not used when specifying audio data), and t, the time. Staehli defines various operators such as scale or clip which allow specifiers to construct content specifications by applying transformations to an original data source. A view specification describes how the logical space of content specifications is mapped to the space of physical devices. While this mapping considers scaling because of logical application requirements (e.g., viewing only a selected area of the frames, or rendering a video faster or slower than its original timing), it explicitly does not consider issues of discretisation or limited resource capacity. The view specification therefore defines an ideal presentation, which could only be achieved on a non-discrete device with unlimited resources.

In order to render multimedia content on an actual device, the system computes a presentation plan, which results in an actual presentation. The quality of a presentation is then given by the "distance" between the actual and the ideal presentation. To calculate this distance, Staehli first observes that it is impossible to uniquely derive the ideal presentation behind an actual presentation without additional information. The missing information is given through an error model, defining the possible ways in which the actual presentation can deviate from the ideal.
Staehli defines error functions which can determine measurements such as jitter or shift in an actual presentation. An error model is a set of such error functions which are used together to model the quality of a multimedia presentation. Based on this work, Staehli, together with Eliassen, Aagedal, and Blair, proposed a QoS semantics for component-based systems. This article follows a very similar line, modelling an ideal execution of a system on a machine with unlimited resources and an actual execution on an actual system. Again, an error model is used to describe the perceived quality of the actual execution. The properties supported by this semantics are timeliness and data-quality properties; that is, properties such as response time or accuracy of results. Although the paper has component-based systems in its title, it remains unclear what about the approach is specific to component-based systems. While both applications of Staehli's approach have been very characteristic-specific, the principle seems sufficiently general to work for other characteristics, too. The error functions are, in essence, a model of individual measurements, so a generalisation should not be too difficult. It would therefore be possible to argue that this approach is in fact measurement-based. We still classify it as characteristic-specific, because so far the authors have not attempted to generalise it into a purely measurement-based approach. However, we consider this very much a borderline case.
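The flavour of such error functions can be sketched as follows. The definitions of shift and jitter below are simplified illustrations, not Staehli's formal ones: they compare the actual presentation times of frames against the ideal timing derived from a view specification.

```python
# Simplified error functions over frame presentation times (seconds).
# `ideal` comes from the view specification; `actual` is what the
# presentation plan achieved on the real device.

def shift(ideal, actual):
    """Mean offset of actual frame times from the ideal times."""
    return sum(a - i for i, a in zip(ideal, actual)) / len(ideal)

def jitter(ideal, actual):
    """Largest per-frame deviation after removing the constant shift."""
    s = shift(ideal, actual)
    return max(abs((a - i) - s) for i, a in zip(ideal, actual))

ideal  = [0.00, 0.04, 0.08, 0.12]   # 25 frames/s
actual = [0.01, 0.05, 0.10, 0.13]   # delayed and slightly uneven
# shift(ideal, actual) is about 0.0125 s; jitter is about 0.0075 s

error_model = [shift, jitter]  # used together, as in Staehli's quality model
```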

Richard Staehli. Quality of Service Specification for Resource Management in Multimedia Systems. DPhil thesis, Oregon Graduate Institute of Science & Technology, 1996.
[BibTeX]
Richard Staehli, Jonathan Walpole, and David Maier. Quality of service specification for multimedia presentations. Multimedia Systems, 3(5/6), November 1995.
[BibTeX]
Richard Staehli, Frank Eliassen, Jan Øyvind Aagedal, and Gordon Blair. Quality of service semantics for component-based systems. In Middleware 2003 Companion, 2nd Int’l Workshop on Reflective and Adaptive Middleware Systems, 2003.
[BibTeX]

Measurement-Based Approaches

The approaches combined under this heading all make characteristics first-class citizens of a specification; that is, they allow characteristics to be defined as part of a specification. We call them measurement-based, because characteristics as defined by these approaches are essentially measurements in the sense of measurement theory. Measurement-based approaches can be categorised into two groups:

  1. Predicate-Based Approaches: These approaches use measurements to formulate constraints on the system behaviour. A system either fulfils these constraints or it does not fulfil them, so the underlying semantics is very similar to that of functional specifications: For each system we can decide whether it is a correct implementation of the specification, but over and above that we cannot compare different implementations.
  2. Optimisation-Based Approaches: These approaches deviate from predicate-based approaches in viewing the achievement of non-functional properties (typically called quality in this context) as an optimisation problem. For each system we can still analyse whether it is a correct implementation of a specification, but in addition, we can compare two systems A and B, and, for example, state that A is a better implementation than B. Such statements are of course only valid in relation to some objective function. Objective functions are typically given as utility functions (or value functions) representing users’ or clients’ preferences on different quality combinations.
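The optimisation-based view can be illustrated with a tiny objective function. The characteristics and weights below are invented for illustration; the point is only that a utility function induces an ordering on implementations that are all correct with respect to the constraints.

```python
# Sketch of an objective (utility) function over two measured
# characteristics, representing a client's preferences.

def utility(response_time_ms, accuracy):
    """Prefer fast responses and accurate results; accuracy is
    weighted more heavily in this (invented) preference model."""
    speed_score = 1.0 - min(response_time_ms / 1000.0, 1.0)
    return 0.3 * speed_score + 0.7 * accuracy

# Two implementations, both assumed to satisfy the hard constraints:
system_a = utility(response_time_ms=200.0, accuracy=0.99)
system_b = utility(response_time_ms=50.0,  accuracy=0.90)
# Under this objective function, A is the better implementation,
# even though B is faster.
```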

Another interesting distinction is based on the degree of formality with which the measurements can be defined in the various approaches. We can distinguish two major cases: A first group of approaches defines measurements as functions of some domain without providing a semantic framework relative to which the meaning of each measurement could be formally defined. We say that these approaches have a weak semantics, because measurements are barely more than names for values. The second group of approaches provides a semantic framework---albeit the degree of formality may vary between approaches---and, thus, allows specifiers to define the meaning of measurements formally and precisely. We say that these approaches have a strong semantics.


The basic terms employed in this strand of research have been standardised by the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU). The most important of these terms are defined in:

Information technology – quality of service: Framework. ISO/IEC 13236:1998, ITU-T X.641, 1998.
[BibTeX]

Predicate-Based Approaches

One of the earliest works proposing a measurement-based specification language for non-functional properties of component-based systems is by Xavier Franch. It proposes a language called NoFun, whose main concept is the non-functional attribute. Franch distinguishes basic attributes and derived attributes. While derived attributes are formally specified in terms of other (basic or derived) attributes, basic attributes are not formally specified: they remain names for values, and their semantics can only be expressed outside NoFun. Franch's approach is therefore an approach with weak semantics. Nonetheless, it already contains many of the concepts found in modern predicate-based approaches.
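The basic/derived distinction can be sketched as follows. NoFun's actual syntax differs and all attribute names here are invented; the sketch only shows why basic attributes carry weak semantics while derived ones are formally defined.

```python
# Basic attributes: just named values; their meaning (units, how they
# are measured) lives outside the language.
basic = {
    "avg_response_time_ms": 120.0,
    "failure_rate": 0.002,
}

# Derived attributes: formally specified as functions of other
# (basic or derived) attribute values.
derived = {
    "availability": lambda a: 1.0 - a["failure_rate"],
    "fast_and_dependable": lambda a: a["avg_response_time_ms"] < 200.0
                                     and a["failure_rate"] < 0.01,
}

values = {name: f(basic) for name, f in derived.items()}
# values["availability"] is about 0.998; "fast_and_dependable" holds
```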

Xavier Franch. Systematic formulation of non-functional characteristics of software. In Proc. 3rd Int’l Conf. on Requirements Engineering, pages 174–181. IEEE Computer Society, 1998.
[BibTeX]

Abadi and Lamport present an approach which integrates time as a flexible variable into temporal logic specifications. Although this approach is limited to the expression of timeliness properties, we classify it as a measurement-based approach, because the individual measurements are explicitly modelled as part of the specification (using normal flexible variables of the specification language), and are thus first-class citizens. Also, the approach can be extended to arbitrary measurements. Abadi and Lamport use standard temporal logic as their formal framework in which they also define their measurements. We can, therefore, classify them as an approach with a strong semantics.

Martin Abadi and Leslie Lamport. An old-fashioned recipe for real time. ACM TOPLAS, 16(5):1543–1571, September 1994.
[HTTP] [BibTeX]
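The key idea can be sketched as follows (notation simplified from the paper): real time is represented by a flexible variable now, a dedicated action advances it while leaving the program variables unchanged, and a non-Zenoness condition rules out behaviours in which time stops.

```latex
% Time is modelled by the flexible variable "now"; a dedicated action
% advances it while leaving the program variables v unchanged.
\mathit{AdvanceTime} \;\triangleq\; (now' \in (now, \infty)) \land (v' = v)

% Non-Zenoness: time eventually exceeds any bound t, i.e. it never stops.
\mathit{NZ} \;\triangleq\; \forall t \in \mathbb{R} :\; \Diamond\,(now > t)
```

Timeliness properties are then ordinary temporal formulas over now, which is what makes the measurement a first-class citizen of the specification.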

In his thesis, Aagedal defines the Component Quality Modelling Language (CQML), a specification language for non-functional properties of component-based systems. The definition remains largely at the syntactic level; semantic concepts are mainly explained in plain English without formal foundations. The language is based on the ISO definitions. Arbitrary measurements can be defined as quality characteristics, which have a domain and a semantics given in a values clause. The approach has a strong semantics by our definition of the term, even though the degree of formality of the semantic framework is comparatively low. Röttger and Zschaler have proposed a more explicit representation of the semantic framework in previous work.

Jan Øyvind Aagedal. Quality of Service Support in Development of Distributed Systems. PhD thesis, University of Oslo, 2001.
[BibTeX]
Simone Röttger and Steffen Zschaler. CQML+: Enhancements to CQML. In Jean-Michel Bruel, editor, Proc. 1st Int’l Workshop on Quality of Service in Component-Based Software Engineering, pages 43–56, Toulouse, France, June 2003. Cepadues-Editions.
[BibTeX]
Simone Röttger and Steffen Zschaler. Tool Support for Refinement of Non-functional Specifications. Software and Systems Modeling (SoSyM), 6(2), Springer, 2007.
[HTTP] [BibTeX]

The UML has developed into a well-accepted language for specifying software systems. Consequently, several researchers have investigated using UML to model measurements and non-functional properties of software. Most important among these approaches is probably the UML SPT profile, which is based on ideas previously presented by Selic. This standard profile defines a meta-model for the specification of performance- and scheduling-related parameters in UML models. Although it is comparatively flexible, and not specific to one characteristic, it does not consider issues related to CBSE, such as independent development of components and applications, or runtime management of resource allocation and component usage by component runtime environments.

Bran Selic. A generic framework for modeling resources with UML. IEEE Computer, 33(6):64–69, June 2000.
[HTTP] [BibTeX]
Object Management Group. UML profile for schedulability, performance, and time specification. OMG Document, March 2002.
[HTTP] [BibTeX]

Another interesting approach has been chosen by Skene et al. They present SLAng, a language for precisely specifying Service Level Agreements (SLAs). Their work is based on the precise UML (pUML) definition of the semantics of UML, in which UML-like (meta-)models are used to specify both the syntax and the semantics of a modelling language. SLAng leverages the flexibility inherent in such a meta-modelling approach to allow specifiers to define measurements of their own, complete with a tailor-made semantic domain and semantic mapping. Because it uses UML as its semantic framework, it has a strong semantics; however, because the semantics of UML itself is not formally defined, the degree of formality of SLAng definitions remains very low.

James Skene, D. Davide Lamanna, and Wolfgang Emmerich. Precise service level agreements. In Proc. 26th Int’l Conf. on Software Engineering (ICSE’04), pages 179–188, Edinburgh, Scotland, May 2004. IEEE Computer Society.
[HTTP] [BibTeX]
Tony Clark, Andy Evans, and Stuart Kent. Engineering modelling languages: A precise meta-modelling approach. In R.-D. Kutsche and H. Weber, editors, Proc. 5th Int’l Conf. on Fundamental Approaches to Software Engineering (FASE 2002), volume 2306 of LNCS, pages 159–173, Grenoble, France, April 2002. Springer.
[BibTeX]

In his dissertation, Zschaler provides a semantic framework for the specification of non-functional properties of component-based systems. The framework is based on the Temporal Logic of Actions and generalises Abadi and Lamport's approach discussed above to arbitrary product properties. An important distinction is that between intrinsic measurements, which can be determined directly by inspecting a component, and extrinsic measurements, which can only be determined when a component is used in a specific context.

Steffen Zschaler: A Semantic Framework for Non-functional Specifications of Component-Based Systems. Dissertation submitted to Technische Universität Dresden, Germany. Published as "Non-functional Specifications of Components and Systems: A Generic Semantic Framework and Its Applications" with VDM Verlag Dr. Müller in July 2008. ISBN: 978-3639054026
[HTTP] [BibTeX]
Steffen Zschaler: Formal Specification of Non-functional Properties of Component-Based Software Systems: A Semantic Framework and Some Applications Thereof. Software and Systems Modeling (SoSyM), available online-first from SpringerLink, 2009.
[HTTP] [BibTeX]

Troya and Vallecillo present a similar approach as an extension of their e-Motions environment for the development of domain-specific languages. The behavioural semantics of such languages are specified using in-place model transformation rules. Non-functional properties are then specified by adding 'Observer' objects and extending the basic semantic rules to update the values of these objects. Unlike Zschaler's approach above, this work requires the rules of the target language/system to be modified invasively to weave in the semantics of updating observer objects. The semantics of an observer is thus defined separately from the observer itself and cannot easily be reused. Durán, Zschaler, and Troya present a technique combining the two approaches to allow the modular specification of observers in e-Motions.

Javier Troya, José E. Rivera, and Antonio Vallecillo. Simulating Domain Specific Visual Models by Observation. In Proc. Symposium on Theory of Modeling and Simulation (DEVS’10), pages 46–53, Orlando, FL, USA, April 2010.
[HTTP] [BibTeX]
Francisco Durán, Steffen Zschaler, and Javier Troya. On the Reusable Specification of Non-functional Properties in DSLs. To appear in Proc. SLE 2012.

Optimisation-Based Approaches

Liu et al. present a task-based model to describe QoS properties of applications. The tasks are considered to be so-called flexible tasks that "[...] can trade the amounts of time and resources [they] require to produce [their] results for the quality of the results [they] produce." Each task is described by a reward profile, which relates the quality of incoming data, the quality of data produced, and the amount of resources used while processing. Resource demand is considered only where it can be adjusted during execution; the model is completely oriented towards adaptation, and admission control is not considered. When tasks are composed to form applications, they interact in a producer–consumer pattern. Consumers formulate their expectations on the quality of incoming data using value functions, that is, objective functions over relevant quality measurements. A QoS management system then strives to allocate resources to tasks such that the value functions of the corresponding consumers are maximised. The approach uses a weak semantics of measurements.

J[ane] W. S. Liu, K[lara] Nahrstedt, D[avid] Hull, S[higang] Chen, and B[aochun] Li. EPIQ QoS characterization. ARPA Report, Quorum Meeting, July 1997.
[BibTeX]

Sabata et al. present a task-based model. System specifications are composed from metrics and policies, and are written from three perspectives:

  1. Application Perspective: In this perspective one specifies the properties of one application without considering other applications, which might contend for the same resources. The specification uses metrics, which are essentially measurement definitions, and benefit functions---objective functions used to formulate constraints over metrics.
  2. Resource Perspective: This perspective serves to determine the total resource demand for each individual resource.
  3. System Perspective: In this perspective one specifies how resource conflicts between different applications can be resolved.

Again, the approach uses a weak semantics. However, the authors provide a classification of different types of metrics, so that some additional information about the semantics of a measurement can be derived from its placement in this classification.

Bikash Sabata, Saurav Chatterjee, Michael Davis, Jaroslaw J. Sydir, and Thomas F. Lawrence. Taxonomy for QoS specifications. In Proc. 3rd Int’l Workshop on Object-oriented Real-Time Dependable Systems (WORDS’97), Newport Beach, California, February 1997.
[BibTeX]

In his dissertation, Lee presents another approach to modelling non-functional properties of applications and systems as an optimisation problem. In contrast to the two approaches described before, this approach does not consider the internal structure of applications, but is only concerned with balancing the resource allocation to applications contending for shared resources. The approach also features a weak semantics, defining measurements (called quality dimensions) as name–value pairs. For each measurement, the author defines an ordering relationship over the value domain. Resource demand and resource allocations are also simplified to name–value pairs. For each application Lee defines a resource profile as a relationship between allocated resources and delivered quality. The quality specification of an application is given by a task profile, the main part of which is a utility function representing the desired quality to be produced by this application. These utility functions are then combined in a weighted sum to form the system utility. The system utility is the global objective function to be maximised by allocating resources to applications. Lee has developed several algorithms to solve such optimisation problems efficiently and with sufficient accuracy.

Chen Lee. On Quality of Service Management. PhD thesis, Carnegie Mellon University, August 1999.
[BibTeX]
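Lee's optimisation problem can be illustrated with a minimal sketch (the resource profiles, weights, and capacity below are invented for illustration; Lee considers multiple resources and quality dimensions): each application's resource profile maps an allocation to delivered quality, per-application utilities are combined in a weighted sum, and the allocator maximises this system utility subject to the capacity constraint.

```python
from itertools import product

# Hypothetical resource profiles: quality delivered per units of a single
# shared resource allocated to each application (Lee's "resource profile").
profiles = {
    "video": {0: 0.0, 1: 0.4, 2: 0.7, 3: 0.9},
    "audio": {0: 0.0, 1: 0.6, 2: 0.8, 3: 0.85},
}
weights = {"video": 0.7, "audio": 0.3}  # relative importance of each task
CAPACITY = 3                            # total units of the shared resource

def system_utility(alloc):
    """Weighted sum of per-application utilities (the global objective)."""
    return sum(weights[a] * profiles[a][alloc[a]] for a in profiles)

# Exhaustive search over feasible allocations; Lee's algorithms solve
# such problems far more efficiently, this only makes the objective concrete.
best = max(
    (dict(zip(profiles, a)) for a in product(range(CAPACITY + 1), repeat=2)
     if sum(a) <= CAPACITY),
    key=system_utility,
)
print(best, round(system_utility(best), 3))
```

With these numbers the allocator gives two units to "video" and one to "audio", because the weighted marginal utility of a third video unit is lower than that of the first audio unit.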

Profiling-Based Approaches to Prediction

There is a very large set of work on profiling and its use in the prediction of resource usage or of non-functional properties of applications on particular platforms. As our focus, so far, has been on the specification of non-functional properties, here we will reference only one work as a representative of this large field of research. The references from this paper should give the interested reader a good start into further literature from this field.


Shimizu and colleagues present a profiling-based approach to modelling the resource consumption of applications under different workloads and on different platforms. Their approach gathers observations of an application on different platforms and under different circumstances and combines these into a regression-based model of resource consumption, which can then be used to predict resource usage even on previously unseen platforms. Because the approach is purely based on observing runtime behaviour, the authors claim it to be agnostic of specific application and platform semantics. They report prediction errors in the range of 6–24%, which seem to depend on the specific application measured; this suggests to me that some dependence on application semantics still exists. Apart from its actual content, the paper also provides a substantial review of related work.

Shuichi Shimizu, Raju Rangaswami, Hector A. Duran-Limon, and Manuel Corona-Perez. Platform-Independent Modeling and Prediction of Application Resource Usage Characteristics. Journal of Systems and Software 82(12):2117–2127, 2009.
[HTTP] [BibTeX]