Software Engineering in Startup Companies

The discussion about software engineering in the special environment of startup companies continues with a focus on the software life cycle model and the tracking of requirements.

Software Life Cycles
According to classical software engineering (SE), the development of software takes place in stages. Each stage has distinct outputs, which can be tested before you proceed to the next stage. The stages are:

  • Analysis: The problem and requirements for a solution are identified. Main output: Software requirements document.
  • Design: A software system is designed to fulfill the previously identified requirements. Main output: High- and low-level design documents.
  • Implementation/coding: The software system is implemented according to the previously defined design. Main output: Source code.
  • Testing: Individual components as well as the entire system are tested for fulfillment of the requirements identified during the analysis stage. Main output: Test results.

    Numerous models that describe the arrangement of the individual stages and the feedback among them have been suggested. These are called the software life cycle models. Some examples are the waterfall model, spiral model and incremental model, which are thoroughly discussed in SE literature (for example, Software Engineering: A Practitioner's Approach, 4th ed., by R. Pressman, McGraw-Hill).

    Consciously following a life cycle model lends structure to an otherwise amorphous effort. When you can identify the end of a stage, you know the time has come to perform specific tests, tests that enable you to find errors early in the development process. A major design flaw that can be fixed with just a stroke of a pen during the design stage may require major recoding if discovered when the software is almost finished. It's therefore important to perform these tests not just at the end of the development effort, but rather from the beginning and throughout the process. A life cycle model facilitates this.

    By testing the output of a stage, you provide a well-understood and firm foundation for the team to build on. Once such a foundation is set, it's not supposed to change. In the ideal case, all team members know what to achieve next, since this was set forth unambiguously in the previous stage.

    In a startup company, however, the software life cycle is usually not well ordered. Markets develop swiftly, and requirements change even long after the analysis stage has supposedly been completed. Time and time again the engineering department finds itself under pressure to do whatever it takes to provide new features that were not originally planned.

    Is there a life cycle model that not only works under these conditions but also helps to improve them? Of the many models developed, the incremental model seems to lend itself most closely to the way a startup company operates, but it requires a few modifications.

    As you can see in Figure 1, individual releases of the software are developed in a "pipelined" fashion. In theory this allows the rapid release of new features for your software. The incremental model works well for conventional companies operating in established markets, which use it to reduce the complexity of an individual release. Many of the features for the next releases are already known through market observation, feedback from customers of other products, established marketing channels and so forth. The more established companies also have the resources to maintain multiple parallel development streams.

    The startup reality renders this model impractical. Hiring qualified personnel is particularly difficult for a startup. It's unrealistic to assume that you'll hire a team of experienced analysts at the very beginning, followed first by designers and then by developers. In theory, software engineers should be able to handle all phases of product development. Unfortunately, the proliferation of this title throughout the industry has greatly reduced its value. Many people who call themselves software engineers really don't have a thorough software engineering education and often their experience is only in coding and maybe some design. I am in the same situation and am still learning. So while you can find many software engineers, those with the necessary skills, training and experience for all product-development stages are few and far between.

    The overall head count in your company is likely to be very low for an initial period, only to increase quite rapidly later on. Thus, in the beginning, each developer is also in the position of analyst as well as designer. Obviously, given the lack of personnel, you may not be able to do the analysis for the next release during the design phase of the first release. You have neither enough resources nor sufficient market feedback to begin the development cycle of the second release right away. After all, you haven't even released the first version of your product. Occasional feedback is passed on to you by marketing and sales, gathered from discussions with potential customers. But you won't get true customer feedback until you've shipped the first version to beta customers. Compared to established markets that provide a brightly illuminated playing field, a startup operates in the dark.

    During analysis, and even during design, you may have to perform research to prove technical concepts or ideas on which you plan to base your product. This may be done in the form of a prototype, which provides feedback for the analysis and design stages while adding complexity to the initial development stages.

    Quickly developing markets, initially missing customer feedback, a lack of resources, and analysis and design stages influenced by research all lead me to suggest a modified incremental life cycle model for startup companies.

    As Figure 2 indicates, analysis, research and design are intertwined for the first release. Analysis for the second release begins at a later stage when two conditions have been met:
    1. Requirements for the next release are available.
    2. Enough new developers have been hired to free the most senior developers to work with marketing on the analysis stage of the next release.

    The analysis for the second release starts after enough customer feedback has been collected to get a good feel for what the market wants. Without that feedback there's really no point in attempting to release yet another version of a product that may have had a lukewarm reception the first time around. The feedback is important and therefore needs to be properly analyzed and prioritized. You have to resist the temptation to stuff all requested features into the next release.

    Once you have a product on the market, you'll get a constant stream of requests. Thus, after the initial lag, you can start working on new releases earlier and earlier as staffing permits. The modified incremental life cycle model reflects this reality.

    Also note that the research component shrinks with each subsequent release. The reason is simply that the first release establishes a core technology on which you'll continue to build. Yours is a commercial company, not a research lab. As you progress, it's important not to have too much research in the critical path of your project, since research can't be scheduled reliably.

    By keeping the first release small and simple, you'll receive market feedback sooner. Such early feedback is important to align the company and its product with the market. The longer it takes you to get feedback, the more time you spend developing "blind," in possibly the wrong direction. Provide the core functionality in the first release. The market will let you know in which direction to go. Potential customers are often willing to negotiate now if the fancy feature they want can be promised to them in an upcoming release. This life cycle model sets you up for quick releases to satisfy customers without having to drastically change the requirements for an ongoing development cycle.

    Tracking Requirements
    As we discussed in the first part of this series (JDJ, Vol. 3, Issue 7), the analysis stage will not be as rigorous as classical SE would suggest. Many requirements are not known at all, or at least are not understood well enough to formulate in a quantifiable manner.

    Yet it's important not to drop a requirement through simple oversight. Consequently you have to track as many requirements as possible, as completely as possible. The traditional tool used for this task is the "traceability matrix."

    The matrix is essentially a table. The individual requirements are written from top to bottom and hence label the rows. The individual development stages (analysis, design, implementation, testing) are written from left to right and label the columns. Each cell of the table contains a record of where and how the requirement was addressed in that stage. For the development of product documentation, either a similar table should be created or documentation should become an additional column in the matrix.

    At the end of each development stage each requirement should be checked to see whether it has been addressed during that stage. A look at the table will reveal any omissions, which would be very costly to fix in later stages of product development. Such a matrix can save time and money.
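
    In code, such a matrix comes down to a small data structure. The following Java sketch is purely illustrative rather than a prescribed tool; the names (TraceabilityMatrix, Stage, record, missingEntries) are invented for the example. Each requirement gets one row, and at the end of a stage you can ask which requirements still lack an entry for that stage.

    import java.util.*;

    // A minimal sketch of a traceability matrix. All names are illustrative only.
    public class TraceabilityMatrix {

        // Development stages that label the columns; documentation is included
        // here as the additional column mentioned above.
        public enum Stage { ANALYSIS, DESIGN, IMPLEMENTATION, TESTING, DOCUMENTATION }

        // One row per requirement ID; each row maps a stage to a note on where
        // and how the requirement was addressed in that stage.
        private final Map<String, Map<Stage, String>> rows = new LinkedHashMap<>();

        // Adds a requirement as a new, initially empty row.
        public void addRequirement(String requirementId) {
            rows.putIfAbsent(requirementId, new EnumMap<>(Stage.class));
        }

        // Records where and how a requirement was addressed in a given stage.
        public void record(String requirementId, Stage stage, String artifactReference) {
            addRequirement(requirementId);
            rows.get(requirementId).put(stage, artifactReference);
        }

        // Returns the requirements that have no entry yet for the given stage:
        // the omissions to look for at the end of that stage.
        public List<String> missingEntries(Stage stage) {
            List<String> missing = new ArrayList<>();
            for (Map.Entry<String, Map<Stage, String>> row : rows.entrySet()) {
                if (!row.getValue().containsKey(stage)) {
                    missing.add(row.getKey());
                }
            }
            return missing;
        }
    }

    At the end of the design stage, for example, matrix.record("REQ-12", Stage.DESIGN, "high-level design, section 4.2") fills one cell, and matrix.missingEntries(Stage.DESIGN) lists the requirements that were overlooked; the requirement ID and section reference are, of course, made up.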

    A traceability matrix will provide a company with an important benefit. Since the matrix records the trace of each requirement throughout the development stages, it shows the team which aspects of the product are affected by requirement changes. If the actual design and implementation take place in a modular fashion, exhibiting low coupling and high cohesion (see the previously mentioned article), chances are that only those aspects of the product mentioned in that feature's matrix row need to be modified. As discussed earlier, flexibility and quick turnaround are key, especially for startup companies. The traceability matrix will facilitate such fast reaction times.

    Startups have two particular problems with maintaining a traceability matrix. First, as already mentioned, not all requirements are known and not all are quantified. The requirement itself may thus have to be formulated in a very unspecific manner, making it difficult to fill the matrix cells with precise information. In that case the matrix should still be maintained. Unquantified requirements should be marked and revisited as soon as more information becomes available. When that occurs the matrix will aid in identifying those parts of the product that need to be tested to see whether the modified requirement is still fulfilled.
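
    The two uses just described, looking up a requirement's footprint after a change and flagging requirements that aren't yet quantified, can be added to the sketch above with a few more members; again, the names are invented for illustration.

        // Requirements whose wording is still too vague to quantify.
        private final Set<String> unquantified = new HashSet<>();

        // Returns every recorded entry for one requirement: the aspects of the
        // product that a change to this requirement is likely to touch.
        public Map<Stage, String> footprint(String requirementId) {
            return rows.getOrDefault(requirementId, Collections.emptyMap());
        }

        // Marks a requirement that could not yet be formulated in a quantifiable way.
        public void markUnquantified(String requirementId) {
            addRequirement(requirementId);
            unquantified.add(requirementId);
        }

        // Lists the requirements to revisit once more information becomes available.
        public Set<String> toRevisit() {
            return Collections.unmodifiableSet(unquantified);
        }

    When a marked requirement is finally pinned down, footprint() points to the design documents, classes and test cases that need to be rechecked against the new wording.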

    The second startup difficulty with traceability matrices is the work required to maintain them. On complex products a very detailed matrix can fill hundreds if not thousands of pages. Clearly, a compromise needs to be made here. For starters, in many cases it doesn't have to be one monolithic matrix covering the whole product. Even though a complete matrix is always recommended to achieve product completeness, a company might choose to have each team maintain its own matrices. The requirements within one subcomponent or project are identified and listed in a matrix. Maintaining such a smaller matrix is naturally a much less resource-intensive task. On the downside, overall product requirements may end up listed in the matrices of several product teams. In that case some communication overhead is required to keep these matrices in sync.

    One might also choose to leave some of the requirements generally undefined and untraced. This is not at all ideal, but may be necessary due to a lack of resources. In that case the matrix should be limited to requirements that somehow have been deemed more critical than others. This method is risky since it again allows some requirements to be forgotten or not to be traceable if a change is required.

    A traceability matrix is a powerful tool to ensure product completeness and a quick trace of a feature's "footprint" within the product. Even though startup companies are likely to compromise on some aspects of the matrix, it's highly recommended to keep it as complete as possible. The payoffs are significant.

    Design, implementation and change control will be the topic of the next installment in this series.

    About Juergen Brendel
    Juergen Brendel is a software architect at Resonate Inc.
