Seminar 1 (Week 2; 7th October):
Software Maintenance and Evolution
Introduction
Every seminar sheet will be composed of 4 tasks; these are topics covered in the lecture and give
you an opportunity to expand your knowledge. We have included questions that reflect the type of
things you need to be thinking about for success in the module assessment. Reading is provided for
each topic as well as the lecture slides.
Task 1: What is Brooks’ Law and why is it important?
Brooks’ Law, applied to a software project, is the observation that adding more engineers in the hope of reducing the time to completion will in fact do the opposite and delay the project. This is because of issues such as the time needed to ramp up and train new staff: getting engineers familiar with the project, the code base and the documentation. The current engineers must take time away from their own tasks to train them, and are therefore less productive, delaying the project further. Additionally, adding engineers increases the complexity of communication, as the more engineers there are, the more channels of communication exist.
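The communication point can be made precise: a team of n people has n(n−1)/2 pairwise communication channels, so channels grow quadratically while headcount grows only linearly. A minimal sketch of this arithmetic (the class name is illustrative):

public class BrooksLaw {
    // Number of pairwise communication channels in a team of n people.
    static int channels(int n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        // Doubling the team from 5 to 10 people more than quadruples
        // the communication channels (10 -> 45).
        for (int n : new int[] {5, 10, 20}) {
            System.out.println(n + " people -> " + channels(n) + " channels");
        }
    }
}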
Brooks’ Law is important because it can lead to more efficient management of scarce resources and a reduction in potential costs. It highlights the importance of planning a project to reduce the likelihood of running over schedule or over budget. It also underlines the necessity of good communication during a project, and of recognised development processes such as agile. Lastly, it implies that all staff need appropriate training before being allocated to a project.
Task 2: It is impossible to change the shape of the Bathtub Curve. Discuss.
The bathtub curve is a conceptual tool that shows how systems behave over time by plotting failure rate against time. It is important because its shape can indicate whether a system is good or poor. For example, a steep decline in bugs at the start, a long period of stability and a slow rise in wear-out failures indicates a good system; a shallow decline in bugs at the start, a short period of stability and a sharp rise in wear-out failures indicates a poor system.
Firstly, there are several practices which, applied throughout a project, will change the shape of the bathtub curve. At the start of the curve, the infant mortality phase, sufficiently thorough testing and resolution of bugs will make the initial decline steeper rather than shallower. The project then moves into the constant failure rate phase, where the rate can be kept low, and lowered further, by methods such as refactoring. As time goes by, the system will begin to deteriorate; even here the curve can be changed, delaying the rising failure rate and keeping the rise as shallow as possible through corrective maintenance and effective testing and resolution of bugs.
Therefore, overall, it is not impossible to change the shape of the bathtub curve; several practices must be applied from early in the project to maintain it.
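One standard way to see how the shape can change is to model the failure rate with a Weibull hazard function h(t) = (β/η)(t/η)^(β−1): a shape parameter β < 1 gives a falling (infant mortality) rate, β = 1 a constant rate, and β > 1 a rising (wear-out) rate. The sketch below assumes this common reliability model; the parameter values are purely illustrative:

public class BathtubCurve {
    // Weibull hazard rate: h(t) = (beta / eta) * (t / eta)^(beta - 1).
    static double hazard(double t, double beta, double eta) {
        return (beta / eta) * Math.pow(t / eta, beta - 1);
    }

    public static void main(String[] args) {
        // beta < 1: falling rate (infant mortality); beta = 1: constant;
        // beta > 1: rising rate (wear-out).
        for (double t = 1; t <= 5; t++) {
            System.out.printf("t=%.0f  infant=%.3f  stable=%.3f  wearout=%.3f%n",
                    t, hazard(t, 0.5, 2), hazard(t, 1.0, 2), hazard(t, 3.0, 2));
        }
    }
}

In this model, better testing and maintenance corresponds to pushing β toward 1 and increasing η for the wear-out phase, which flattens and delays the right-hand rise of the curve.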
Task 3: Appraise the use of defensive programming.
Defensive programming is a technique in which the programmer assumes the worst of every input. It includes, but is not limited to, using exception handling and assertions, standardising error handling, and always checking calls to external APIs and libraries. It assists with the maintenance of the software because it provides insurance for the future.
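A minimal sketch of these techniques in Java (the AccountService class and its methods are illustrative, not from the lecture material):

import java.util.Objects;

public class AccountService {
    // Defensive entry point: validate every input before doing any work.
    public void withdraw(String accountId, double amount) {
        // Fail fast: reject bad arguments immediately with a clear error.
        Objects.requireNonNull(accountId, "accountId must not be null");
        if (accountId.isBlank()) {
            throw new IllegalArgumentException("accountId must not be blank");
        }
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive, was " + amount);
        }

        double balance = lookupBalance(accountId);
        // Assertion documents an internal invariant we believe always holds.
        assert balance >= 0 : "balance should never be negative";

        if (amount > balance) {
            throw new IllegalStateException("insufficient funds");
        }
        // ... perform the withdrawal ...
    }

    private double lookupBalance(String accountId) {
        return 100.0; // stub for illustration
    }
}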
Firstly, the code is more robust and errors are trapped early. Additionally, well-written defensive code contains fewer potential bugs. The technique conforms to the “fail fast” principle, which surfaces bugs quickly.
Using defensive programming provides insurance for the future of the code and prolongs the system’s lifetime. There is less build-up of a maintenance backlog, as easy-to-understand code needs less maintenance. Lastly, the code is better protected against cyber attacks.
However, the issues with defensive programming include it being costly and time-inefficient: the more time spent defending the code, the less time is spent on other tasks. Moreover, however much time is spent attempting to prevent bugs, there is still no guarantee that the code will be bug-free. Lastly, some code, such as badly degraded legacy code, is beyond hope, and applying defensive programming to it would be pointless.
Task 4: What are the advantages and disadvantages of Mob Programming?
Mob programming is a software development approach in which a whole team works together on the same task.
The advantages are that more resources are available to solve the problem: with more minds put together, problems are likely to be solved at a faster rate. Additionally, a group of individuals working together generates more ideas and more creativity, as different individuals bring different levels of experience. The likelihood of a higher-quality product is also increased: with more people involved, the likelihood of bugs and errors is reduced. Mob programming is therefore seen as beneficial for critical tasks, where a whole team works on a task with no room for errors or mistakes.
However, the technique is very costly, as it is not a productive or efficient allocation of resources. Other issues include communication being impacted: in a group working on the same task, some voices or ideas may go unheard, which can harm the task at hand. Organisation may be difficult, as the whole team must be present and working at the same time. Lastly, all individuals involved must have good technical and visual support to ensure everyone can see what is happening and why.
Reading
Brooks’ Law:
https://stevemcconnell.com/articles/brooks-law-repealed/
https://codescene.com/blog/visualize-brooks-law/
Bathtub Curve for systems:
https://www.linkedin.com/pulse/20140723115956-15133887-the-software-bathtub-curve-understanding-the-software-systems-lifecycle
Lehman’s Laws:
https://researcher.watson.ibm.com/researcher/view_group.php?id=7296
For a much longer and in-depth read:
https://www.researchgate.net/publication/259979752_An_Empirical_Study_of_Lehman's_Law_on_Software_Quality_Evolution/link/02e7e52ee52794d397000000/download
Technical Debt:
https://martinfowler.com/bliki/TechnicalDebt.html
Software Crisis:
https://en.wikipedia.org/wiki/Software_crisis
Seminar 2 (Week 3; 14th October):
Analysing code-based metrics
Introduction
Every seminar sheet will be composed of 4 tasks; these are topics covered in the lecture and give
you an opportunity to expand your knowledge. We have included questions that reflect the type of
things you need to be thinking about for success in the module assessment. Reading is provided for
each topic as well as the lecture slides.
Task 1: What do you think is the value of using software metrics to measure code?
Using software metrics to measure code encourages us to improve by establishing quality targets. One benefit is being able to identify when and where extra resources are needed in a system; for example, a class that is causing disproportionately more bugs than others. Measurements include the productivity of a developer (LOC per day), which helps us to understand the project more deeply and gives us more control over it.
Additionally, metrics allow us to predict outcomes, such as using historical data to predict the burden of maintenance and change processes, and they guide decision making, such as whether re-engineering part of the code is necessary.
Lastly, metrics can help narrow the cone of uncertainty by making our estimates more precise and reducing the risk of over- or under-estimating.
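As a minimal sketch of collecting one such measurement, here is a crude LOC count of non-blank lines (the file path is illustrative):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class LocCounter {
    public static void main(String[] args) throws IOException {
        // Count non-blank lines in a source file as a crude LOC measure.
        Path source = Path.of("src/Example.java"); // illustrative path
        try (Stream<String> lines = Files.lines(source)) {
            long loc = lines.filter(line -> !line.isBlank()).count();
            System.out.println(source + ": " + loc + " LOC");
        }
    }
}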
Task 2: Discuss one disadvantage of any four of C&K’s metrics of your choice.
Weighted Methods per Class (WMC), Coupling Between Objects (CBO), Lack of Cohesion amongst the Methods of a class (LCOM) and Depth in the Inheritance Tree of a class (DIT) are four metrics in Chidamber and Kemerer’s OO metrics suite.
The Lack of Cohesion amongst the Methods of a class (LCOM) is a measure of the cohesion of a class. It is calculated by considering the presence of class variables in the methods of that class. A disadvantage is that it assumes a cohesive class follows a neat pattern of variable usage in its methods, which very rarely happens in reality.
The WMC is simply a count of the number of methods in a class. The weighted part was dropped in a
later version of the metric. One disadvantage of this metric is that it counts all methods equally, whereas in practice we might want to distinguish between different types of method (constructor, getter, etc.).
The CBO is a measure of the number of classes that each class is associated with (i.e., it either
depends on classes or is depended upon by other classes). One disadvantage of the CBO is that it
doesn’t distinguish between incoming and outgoing couplings, which could be vital knowledge when
tackling excessive coupling.
The DIT is the position of a class in the inheritance hierarchy, with Object at position zero in Java. A class inheriting directly from Object will have a DIT of 1. One disadvantage of the DIT metric is that it is difficult to say what counts as a high or low DIT value; it depends on the application domain.
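As a minimal sketch of how these metrics read on a concrete class (the Invoice, Customer and LineItem classes are invented for illustration; the values follow the definitions above):

import java.util.ArrayList;
import java.util.List;

// DIT = 1: Invoice inherits directly from Object.
// CBO = 2: Invoice is coupled to LineItem and Customer (ignoring library classes here).
// WMC = 3: three methods, all counted equally (the weakness noted above).
public class Invoice {
    private Customer customer;        // set elsewhere; omitted for brevity
    private List<LineItem> items = new ArrayList<>();

    public void addItem(LineItem item) {   // uses 'items'
        items.add(item);
    }

    public double total() {                // uses 'items'
        return items.stream().mapToDouble(LineItem::price).sum();
    }

    public String customerName() {         // uses only 'customer': this method
        return customer.name();            // shares no field with the other two,
    }                                      // which raises LCOM.
}

record Customer(String name) {}
record LineItem(String description, double price) {}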
Task 3: “Small classes are less likely to be fault-prone than larger ones.” To what extent do you agree
with this statement?
Firstly, it can be argued that smaller classes are less likely to be fault-prone than larger ones, because larger classes tend to be more complex. Larger classes are also more likely to suffer from code smells, which take time, effort and money to fix. The LOC metric, for example, would suggest that the more lines of code a class has, the higher the likelihood of faults or bugs. Larger classes may also be doing too much, or have too many methods, reducing cohesion.
However, size alone is not a very useful predictor. For example, a small class can be highly coupled, which in itself makes it prone to faults. It is also unclear what counts as a “small” class, and whether the class has been tested properly: a poorly tested class will be fault-prone regardless of its size. The size of a class may therefore not determine whether it is fault-prone. It can also be argued that the experience of the developer behind the code is more significant than its size: a highly skilled developer may write a large class with minimal faults, while an inexperienced developer may write a small class that is more fault-prone.
Overall, I believe that class size matters when it comes to reducing code smells, but it does not correlate strongly with fault-proneness; complexity is more significant, especially as many companies measure functionality rather than code length.
Task 4: Why are test-based metrics important?
Test-based metrics are the collection of information about the test process.