PhD/PostDoc Vacancies in Persistent Code Reviews


In the fall of 2015 we are starting a brand new project, titled Persistent Code Reviewing, funded by NWO. If you’re into code reviews, software quality, or software testing, please consider applying for a position as a PhD student or postdoc within this project.

To quote the abstract of the project proposal:

Code review is the manual assessment of source code by human reviewers. It is mainly intended to identify defects and quality problems in code changes before they are deployed to production. Code review is widely recommended: several studies have shown that it crucially supports software quality and reliability. Doing code reviews properly, however, requires expensive developer time and zeal, for each and every reviewed change.

The goal of the “Persistent Code Reviewing” project is to make the efforts and knowledge that reviewers put into a code review available outside the context of the code change at which they were directed.

Naturally, given my long-term interest in software testing, our analysis will include any test activities (test design and execution, test adequacy considerations) that affect the reviewing process.

The project is funded by the Top Programme of NWO, the Netherlands Organization for Scientific Research.

Within the project, we have openings for two PhD students and one postdoctoral researcher. The research will be conducted at the Software Engineering Research Group (SERG) of Delft University of Technology in The Netherlands. At SERG, you will be working in a team of around 25 researchers, including six full-time faculty members.

In this project you will be supervised by Alberto Bacchelli and myself. To learn more about any of these positions, please contact one of us.

Requirements for all positions include:

  • Being a team player;
  • Strong writing and presentation skills;
  • Being hungry for new knowledge in software engineering;
  • Ability to develop prototype research tools;
  • Interest in bringing program analysis, testing, and human aspects of software engineering together.

To apply, please send us an application letter, a CV, and (pointers to) written material (e.g., a term paper or an MSc thesis for applicants for the PhD positions, and published papers or the PhD thesis for the postdoc).

We are still in the process of distributing this announcement further: final decisions on the appointments will be made at the end of October.

We look forward to receiving your application as soon as possible.

In Vivo Software Analytics: PhD/Postdoc positions

Last week, we had the kickoff of a new project we are participating in, addressing “In Vivo Software Analytics”. In this project, called “Big Software on the Run” (BSR), we monitor the quality of software in its “natural habitat”, i.e., as it is running in the wild. The project is a collaboration between the three technical universities (3TU) of The Netherlands (Eindhoven, Twente, Delft).


To quote the 3TU.BSR plan:

Millions of lines of code – written in different languages by different people at different times, and operating on a variety of platforms – drive the systems performing key processes in our society. The resulting software needs to evolve and can no longer be controlled a priori as is illustrated by a range of software problems. The 3TU.BSR research program will develop novel techniques and tools to analyze software systems in vivo – making it possible to visualize behavior, create models, check conformance, predict problems, and recommend corrective actions.

Essentially, we propose to address big software by applying big data techniques to system health information obtained at run time. This provides feedback from operations to developers, in order to make systems more resilient against the risks that come with rapid change.

The project brings together some of the best software engineering and data science groups and researchers of the three technical universities in The Netherlands.

The project is sponsored by NIRICT, the 3TU center for Netherlands Research in Information and Communication Technology.

The project duration is four years. At each of the three technical universities, two PhD students and one postdoc will be employed. To maximize collaboration, each PhD student has two supervisors, from two different universities. Furthermore, the full research team, including all supervisors, PhD students, and postdocs, will regularly visit each other.

Within the Delft Software Engineering Research Group, we are searching for one PhD student and one postdoc to strengthen the 3TU.BSR project team.

The PhD student we are looking for will work on the intersection between visualization and dynamic program analysis. In particular, we are searching for a PhD student to work on log event analysis, and visualization of anomalies and exceptions as occurring in traces of running systems. The PhD student will be jointly supervised by Jack van Wijk and myself.

The postdoctoral researcher we are looking for should be able to establish connections between the various research themes and groups working on the project (such as visualization, process mining, repository mining, privacy-preserving log file analysis, and model checking). Thus, we are looking for a researcher who has successfully completed his or her PhD, and who is open to working with several of the six PhD students within the project. The postdoc will be based in the Software Engineering Research Group.

Requirements for both positions include:

  • Being a team player;
  • Strong writing and presentation skills;
  • Being hungry for new knowledge in software engineering;
  • Ability to develop prototype research tools;
  • Interest in bringing visualization, run time analysis, and human aspects of software engineering together.

To apply, please send me an application letter, a CV, and (pointers to) written material (e.g., a term paper or an MSc thesis for applicants for the PhD position, and published papers or the PhD thesis for the postdoc).

We are still in the process of distributing this announcement further: final decisions on the appointments will be made at the end of October.

I look forward to receiving your application!


Think Twice Before Using the “Maintainability Index”

Code metrics results in VS2010

This is a quick note about the “Maintainability Index”, a metric aimed at assessing software maintainability, as I recently ran into developers and researchers who are (still) using it.

The Maintainability Index was introduced at the International Conference on Software Maintenance in 1992. To date, it is included in Visual Studio (since 2007), in the recent (2012) JSComplexity and Radon metrics reporters for Javascript and Python, and in older metric tool suites such as verifysoft.

At first sight, this sounds like a great success of knowledge transfer from academic research to industry practice. Upon closer inspection, the Maintainability Index turns out to be problematic.

The Original Index

The Maintainability Index was introduced in 1992 by Paul Oman and Jack Hagemeister, originally presented at the International Conference on Software Maintenance (ICSM 1992) and later refined in a paper that appeared in IEEE Computer. It is a blend of several metrics, including Halstead’s Volume (HV), McCabe’s cyclomatic complexity (CC), lines of code (LOC), and percentage of comments (COM). For each of these metrics, the average per module is taken, and the averages are combined into a single formula:

Maintainability Index =
  171 - 5.2 * ln(aveV)
      - 0.23 * aveG
      - 16.2 * ln(aveLOC)
      + 50 * sin(sqrt(2.4 * perCM))

Here aveV is the average Halstead Volume, aveG the average cyclomatic complexity, aveLOC the average lines of code, and perCM the average percentage of comment lines per module.

To arrive at this formula, Oman and Hagemeister started with a number of systems from Hewlett-Packard (written in C and Pascal in the late 80s, “ranging in size from 1000 to 10,000 lines of code”). For each system, engineers provided a rating (between 1 and 100) of its maintainability. Subsequently, 40 different metrics were calculated for these systems. Finally, statistical regression analysis was applied to find the best way to combine (a selection of) these metrics to fit the experts’ opinion. This eventually resulted in the given formula. The higher its value, the more maintainable a system is deemed to be.
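
To give a feel for that derivation step, here is a tiny Python sketch of the same kind of least-squares fitting, on entirely synthetic data (four made-up “systems” and made-up expert ratings; the real study used 40 candidate metrics across several HP systems):

    import numpy as np

    # Each row is one synthetic system: [ln(aveV), aveG, ln(aveLOC), intercept].
    X = np.array([
        [6.9, 10.0, 4.6, 1.0],
        [5.2,  4.0, 3.9, 1.0],
        [7.8, 22.0, 5.3, 1.0],
        [6.1,  7.0, 4.2, 1.0],
    ])
    # Made-up expert maintainability ratings (1-100) for these systems.
    ratings = np.array([35.0, 80.0, 15.0, 60.0])

    # Least-squares regression: find the metric weights that best fit
    # the expert opinion. The MI formula is one such fitted combination.
    coeffs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    print(coeffs)  # one weight per metric, plus an intercept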

The maintainability index attracted quite some attention, also because the Software Engineering Institute (SEI) promoted it, for example in their 1997 C4 Software Technology Reference Guide. This report describes the Maintainability Index as “good and sufficient predictors of maintainability”, and “potentially very useful for operational Department of Defense systems”. Furthermore, they suggest that “it is advisable to test the coefficients for proper fit with each major system to which the MI is applied.”

Use in Visual Studio

Visual Studio Code Metrics were announced in February 2007. A November 2007 blogpost clarifies the specifics of the maintainability index included in it. The formula Visual Studio uses is slightly different, based on the 1994 version:

Maintainability Index =
  MAX(0, (171 - 5.2 * ln(Halstead Volume)
             - 0.23 * Cyclomatic Complexity
             - 16.2 * ln(Lines of Code)
         ) * 100 / 171)

As you can see, the constants are literally the same as in the original formula. The new definition merely rescales the index to a number between 0 and 100. Also, the comments metric has been removed.
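
For concreteness, here is a small Python sketch (illustrative only: the function names are mine, and the inputs are hypothetical averaged metrics) that computes both the original four-metric index and the Visual Studio variant:

    import math

    def mi_original(ave_volume, ave_cc, ave_loc, per_comments):
        # 1994 four-metric index; per_comments is the average
        # percentage of comment lines per module (0-100).
        return (171
                - 5.2 * math.log(ave_volume)
                - 0.23 * ave_cc
                - 16.2 * math.log(ave_loc)
                + 50 * math.sin(math.sqrt(2.4 * per_comments)))

    def mi_visual_studio(ave_volume, ave_cc, ave_loc):
        # Visual Studio variant: same constants, comment term dropped,
        # result rescaled from the 171-point scale to 0..100.
        raw = (171
               - 5.2 * math.log(ave_volume)
               - 0.23 * ave_cc
               - 16.2 * math.log(ave_loc))
        return max(0, raw * 100 / 171)

    # A module with average Halstead Volume 1000, average cyclomatic
    # complexity 10, 100 lines of code on average, and 20% comments:
    print(round(mi_original(1000, 10, 100, 20), 1))   # 88.2
    print(round(mi_visual_studio(1000, 10, 100), 1))  # 34.0

Note that scores on the two scales are not directly comparable.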

Furthermore, Visual Studio provides an interpretation:

  MI >= 20        High Maintainability
  10 <= MI < 20   Moderate Maintainability
  MI < 10         Low Maintainability

I have not been able to find a justification for these thresholds. The 1994 IEEE Computer paper used 85 and 65 (instead of 20 and 10) as thresholds, describing them as a good “rule of thumb”.

The metrics are available within Visual Studio, and are part of the code metrics power tools, which can also be used in a continuous integration server.

Concerns

I encountered the Maintainability Index myself in 2003, when working on Software Risk Assessments in collaboration with SIG. Later, researchers from SIG published a thorough analysis of the Maintainability Index (first when introducing their practical model for assessing maintainability and later as section 6.1 of their paper on technical quality and issue resolution).

Based on this, my key concerns about the Maintainability Index are:

  1. There is no clear explanation for the specific derived formula.
  2. The only explanation that can be given is that all underlying metrics (Halstead, Cyclomatic Complexity, Lines of Code) are directly correlated with size (lines of code). But in that case, simply measuring lines of code and taking the average per module is a much simpler metric.
  3. The Maintainability Index is based on the average per file of, e.g., cyclomatic complexity. However, as emphasized by Heitlager et al., these metrics follow a power law, and taking the average tends to mask the presence of high-risk parts (the sketch after this list illustrates this).
  4. The set of programs used to derive the metric and evaluate it was small, and contained small programs only.
  5. Furthermore, the programs were written in C and Pascal, which may have rather different maintainability characteristics than current object-oriented languages such as C#, Java, or Javascript.
  6. For the experiments conducted, only a few programs were analyzed, and no statistical significance was reported. Thus, the results might just as well be due to chance.
  7. Tool smiths and vendors used the exact same formula and coefficients as the 1994 experiments, without any recalibration.
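
To make concern 3 concrete, here is a minimal sketch (with made-up numbers) of two systems that any average-based index cannot distinguish:

    # Cyclomatic complexity per file for two hypothetical systems.
    flat    = [5, 5, 5, 5, 5, 5, 5, 5, 5, 5]   # uniformly moderate
    hotspot = [1, 1, 1, 1, 1, 1, 1, 1, 1, 41]  # one high-risk file

    for name, ccs in [("flat", flat), ("hotspot", hotspot)]:
        print(name, sum(ccs) / len(ccs), max(ccs))
    # flat 5.0 5
    # hotspot 5.0 41

Both systems have an average complexity of 5.0, so any formula built on averages gives them the same score, even though only one of them contains a file that dominates maintenance risk.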

One could argue that any of these concerns is reason enough not to use the Maintainability Index.

These concerns are consistent with a recent (2012) empirical study, in which one application was independently built by four different companies. The researchers used these systems to compare maintainability and several metrics, including the Maintainability Index. Their findings include that size as a measure of maintainability has been underrated, and that the “sophisticated” maintenance metrics are overrated.

Think Twice

In summary, if you are a researcher, think twice before using the maintainability index in your experiments. Make sure you study and fully understand the original papers published about it.

If you are a tool smith or tool vendor, there is not much point in offering several metrics that are all confounded by size. Check the correlations between the metrics you offer, and if any of them are strongly correlated, pick the one with the clearest and simplest explanation.

Last but not least, if you are a developer wondering whether to use the Maintainability Index: most likely, you’ll be better off looking at lines of code, as it gives easier-to-understand information on maintainability than a formula computed over averaged metrics confounded by size.

Further Reading

  1. Paul Oman and Jack Hagemeister. “Metrics for assessing a software system’s maintainability”. Proceedings International Conference on Software Maintenance (ICSM), 1992, pp. 337-344. (doi)
  2. Paul W. Oman and Jack R. Hagemeister. “Construction and testing of polynomials predicting software maintainability”. Journal of Systems and Software 24(3), 1994, pp. 251-266. (doi)
  3. Don M. Coleman, Dan Ash, Bruce Lowther, and Paul W. Oman. “Using Metrics to Evaluate Software System Maintainability”. IEEE Computer 27(8), 1994, pp. 44-49. (doi, postprint)
  4. Kurt Welker. “The Software Maintainability Index Revisited”. CrossTalk, August 2001, pp. 18-21. (pdf)
  5. “Maintainability Index Range and Meaning”. Code Analysis Team Blog, blogs.msdn, 20 November 2007.
  6. Ilja Heitlager, Tobias Kuipers, and Joost Visser. “A practical model for measuring maintainability”. Proceedings 6th International Conference on the Quality of Information and Communications Technology (QUATIC), 2007. (scholar)
  7. Dennis Bijlsma, Miguel Alexandre Ferreira, Bart Luijten, and Joost Visser. “Faster Issue Resolution with Higher Technical Quality of Software”. Software Quality Journal 20(2), 2012, pp. 265-285. (doi, preprint) Page 14 addresses the Maintainability Index.
  8. Khaled El Emam, Saida Benlarbi, Nishith Goel, and Shesh N. Rai. “The Confounding Effect of Class Size on the Validity of Object-Oriented Metrics”. IEEE Transactions on Software Engineering 27(7), 2001, pp. 630-650. (doi, preprint)
  9. Dag Sjøberg, Bente Anda, and Audris Mockus. “Questioning software maintenance metrics: a comparative case study”. Proceedings ACM-IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM), 2012, pp. 107-110. (doi, postprint)

Edit September 2014

Included discussion on Sjøberg’s paper, the thresholds in Visual Studio, and the problems following from averaging in a power law.


© Arie van Deursen, August 2014.

Dimensions of Innovation


As an academic in software engineering, I want to make the engineering of software more effective so that society can benefit even more from the amazing potential of software.

This requires not just good research, but also success in innovation: ensuring that research ideas are adopted in society. Why is innovation so hard? How can we increase innovation in software engineering and computer science alike? What can universities do to increase their innovation success rate?

I was made to rethink these questions when Informatics Europe contacted me to co-organize this year’s European Computer Science Summit (ECSS 2013), with the special theme Informatics and Innovation.

Informatics Europe is an organization of Computer Science Departments in Europe, founded in 2005. Its mission is to foster the development of quality research and teaching in information and computer sciences, also known as Informatics. In its yearly summit, deans and heads of department get together, to share experience in leading academic groups, and to join forces when undertaking new activities at a European level.

When Informatics Europe asked me to be program chair of their two-day summit, my only possible answer was “yes”. I have a long-term interest in innovation, and here I had the opportunity and freedom to compile a full-featured program on innovation as I saw fit, with various keynotes, panels, and presentations by participants: a wonderful assignment.

In compiling the program I took the words of Peter Denning as starting point: “an idea that changes no one’s behavior is only an invention, not an innovation.” In other words, innovation is about changing people, which is much harder than coming up with a clever idea.

In the end, I came up with the following “Dimensions of Innovation”, which guided the composition of the program.

  1. Startups

    Innovation needs optimists who believe they can change the world. One of the best ways to bring a crazy new idea to sustainable success is by starting a new company dedicated to creating and conquering markets that had not existed before.

    Many of the activities at ECSS 2013 relate to startups. The opening keynote is by François Bancilhon, serial entrepreneur currently working in the field of open data. Furthermore, we have Heleen Kist, strategy consultant in the area of access to finance. Last but not least, the first day includes a panel, specifically devoted to entrepreneurship, and one of the pre-summit workshops is entirely devoted to entrepreneurship for faculty.

  2. Patents

    Patents are traditionally used to protect (possibly large) investments that may be required for successful innovation. In computer science, patents are more than problematic, as evidenced by patent trolls, fights between giants such as Oracle and Google, and the differences in regulations in the US and in Europe. Yet at the same time (software) patents can be crucial, for example to attract investors for a startup.

    Several of the ECSS keynote speakers have concrete experience with patents — Pamela Zave at AT&T, and Erik Meijer from his time at Microsoft, when he co-authored hundreds of patents. Furthermore, Yannis Skulikaris of the European Patent Office will survey patenting of software-related inventions.

  3. Open Source, Open Data

    An often overlooked dimension of innovation is open source and open data. How much money can be made by giving away software, or by allowing others to freely use your data? Yet many enterprises are immensely successful based on open source and open data.

    At ECSS, keynote speaker Erik Meijer is actively working on a series of open source projects (related to his work on reactive programming). In the area of open data, we have entrepreneur François Bancilhon, and semantic web specialist Frank van Harmelen, who is head of the Network Institute of the Vrije Universiteit in Amsterdam.

  4. Teaching for Innovation

    How can universities use education to strengthen innovation? What should students learn so that they can become ambassadors of change? How innovative should students be so that they can become successful in society? At the same time, how can outreach and education be broadened so that new groups of students are reached, for example via on line learning?

    To address these questions, at ECSS we have Anka Mulder, member of the executive board of Delft University of Technology, and former president of the OpenCourseWare Consortium. She is responsible for the TU Delft strategy on Massive On-Line Open Courses (MOOC), and she will share TU Delft experiences in setting up their MOOCs.

    Furthermore, ECSS will host a panel discussion, in which experienced yet non-conformist teachers and managers will share their experience in innovative teaching to foster innovation.

  5. Fostering Innovation

    Policy makers and university management are often at a loss on how to encourage their stubborn academics to contribute to innovation, the “third pillar” of academia.

    Therefore, ECSS is a place for university managers to meet, as evidenced by the pre-summit Workshop for Deans, Department Chairs and Research Directors. Furthermore, we have executive board member Anka Mulder as speaker.

    Last but not least, we have Carl-Cristian Buhr, member of the cabinet of Digital Agenda Commissioner and EU Commission Vice-President Neelie Kroes, who will speak about the EU Horizon 2020 programme and its relevance to computer science research, education, and innovation.

  6. Inspirational Content

    All talk about innovation is void without inspirational content. Therefore, throughout the conference, exciting research insights and new course examples will be interwoven in the presentations.

    For example, we have Pamela Zave speaking on The Power of Abstraction, Frank van Harmelen addressing progress in his semantic web work at the Network Institute, and Felienne Hermans on how to reach thousands of people through social media. Last but not least, we have Erik Meijer, who is never scared to throw both math and source code into his presentations.

The summit will take place October 7–9, 2013, in Amsterdam. You are all welcome to join!


Update:

  • The slides I used for opening ECSS 2013

Some Research Paper Writing Recommendations

Last week, I received an email from Alex Orso and Sebastian Uchitel, who had been asked to give a talk on “How to get my papers accepted at top SE conferences” at the Latin American School on Software Engineering. Here’s their question:

We hope you can spare a few minutes to share with us the key recommendations you would give to PhD students that have not yet had successful submissions to top software engineering conferences, such as ICSE.

An interesting request, and I was certainly looking forward to the advice my fellow researchers would provide; you can see their advice in a presentation by Alex Orso.

When working with my students on papers, I must admit I sometimes repeat myself. Below are some of the things I hear myself say most often.

Explain the Innovation

The first thing to keep in mind is that a research paper should explain the innovation. This makes it quite different from a text book chapter, or from a hands-on tutorial. The purpose is not to explain a technique so that others can use it. Instead, the purpose of a research paper is to explain what is new about the proposed technique.

Identify the Contributions

Explaining novelty is driven by contributions. A contribution is anything the world did not know before this paper, but which we now do know thanks to this paper.

I tend to insist on an explicit list of contributions, which I usually put at the end of the paper.

“The contributions of this paper are …”

Each contribution is an outcome, not the process of doing something. Contributions are things, not effort. Thus, “we spent 6 months manually analyzing 500,000 commit messages” is not a contribution. This effort, though, hopefully has resulted in a useful contribution, which may be that “for projects claiming to do test-driven development, to our surprise we found that 75% of the code commits are not accompanied by a commit in the test code.”

Usually, when thinking about the structure of a paper, quite a bit of actual research has been done already. It is then important to reassess everything that has been done, in order to see what the real contributions of the research are. Contributions can include a new experimental design, a novel technique, a shared data set or open source tool, as well as new empirical evidence contradicting, confirming, or enriching existing theories.

Structure the Paper

With the contributions laid out, the structure of the paper appears naturally: Each contribution corresponds to a section.

This does not hold for the introductory and concluding sections, but it does hold for each of the core sections.

Furthermore, it is essential to separate background material from your own contributions. Clearly, most papers will rely on existing theories or techniques. These must be explained. Since the goal of the paper is to explain the innovation, all material that is not new should be clearly isolated. In this way, it is easiest for the reader (and the reviewer) to see what is new about this paper, and what is not.

As an example, take a typical structure of a research paper:

  1. Introduction
  2. Background: Cool existing work that you build upon.
  3. Problem statement: The deficiency you spotted
  4. Conceptual solution: A new way to deal with that problem!
  5. Open source implementation: Available for everyone!
  6. Experimental design for evaluation: Trickier than we thought!
  7. Evaluation results: It wasn’t easy to demonstrate, but yes, we’ve good evidence that this may work!
  8. Discussion: What can we do with these results? Promising ideas for future research or applications? And: a critical analysis of the threats to the validity of our results.
  9. Related work
  10. Concluding remarks.

In such a setup, sections 4-7 can each correspond to a contribution (and sometimes to more than one). The discussion section (8) is much more speculative, and usually does not contribute solid new knowledge.

Communicate the Contributions

Contributions do not just help in structuring a paper.

They are also the key aspect program committees look at when deciding about acceptance of a paper.

When reviewers evaluate a paper, they try to identify and interpret the contributions. Are these contributions really new? Are they important? Are they trivial? Did the authors provide sufficient evaluations for their claims? The paper should help the reviewer by being very explicit about the contributions and the claims to fame of these contributions.

When program committee members discuss a paper, they do so in terms of contributions. Thus, contributions should not just be strong, they should also be communicable.

For smaller conferences, it is safe to assume that all reviewers are experts. For large conferences, such as ICSE, the program committee is broad. Some of the reviewers will be genuine experts on the topic of the paper, and these reviewers should be truly excited about the results. Other reviewers, however, will be experts in completely different fields, and may have little understanding of the paper’s topic. When submitting to prestigious yet broad conferences, it is essential to make sure that any reviewer can understand and appreciate the contributions.

The ultimate non-expert is the program chair. The chair has to make a decision on every paper. If the program chair cannot understand a paper’s contributions, it is highly unlikely that the paper will get accepted.

Share Contributions Early

Getting a research paper, including its contributions, right is hard, especially since contributions have to be understandable by non-experts.

Therefore, it is crucial to offer help to others, volunteering to read preliminary drafts of papers and assessing the strength of their contributions. In return, you’ll have other people, possibly non-experts, assess the drafts you are producing, in this way helping each other to publish a paper at these prestigious conferences.

But wait. Isn’t helping others a bad idea for highly competitive conferences? Doesn’t it reduce one’s own chances?

No. Software engineering conferences, including ICSE and FSE, accept any paper that is good. Such conferences do not work with acceptance rates that are fixed in advance. Thus, helping each other may increase the acceptance rate, but will not negatively affect any author.

Does This Help?

I hope some of these guidelines will be useful to “PhD students that have not yet had successful submissions to top software engineering conferences, such as ICSE.”

A lot more advice on how to write a research paper is available on the Internet. I do not have a list of resources available at the time of writing, but perhaps in the near future I will extend this post with additional pointers.

Luckily, this post is not a research paper. None of the ideas presented here is new. But they have worked for me, and I hope they’ll work for you too.


Image credits: Pencils in the Air, by Peter Logan, Photo by Mira66. flickr

Green Open Access and Preprint Linking


One of the most useful changes to the ICSE International Conference on Software Engineering this year was that the program website contained links to preprints of many of the papers presented.

As ICSE is a large event (over 1600 people attended in 2013), it is worth taking a look at what happened. What is preprint linking? How many authors actually provided a preprint link? What about other conferences? What are the wider implications for open access publishing in software engineering?

Self-Archiving

Preprint linking is based on the idea that authors, who do all the work in both writing and formatting the paper, have the right to self-archive the paper they created themselves (also called green open access). Authors can do this on their personal home page, in institutional repositories of, e.g., the universities where they work, or in public preprint repositories such as arxiv.

Sharing preprints has been around in science for decades (if not longer): as an example, my ‘alma mater’ CWI was founded in 1947, and has a technical report series dating back to that year. These technical reports were exchanged (without cost) with other mathematical research institutes: first by plain old mail, then by email, later via ftp, and now through http.

While commercial publishers may dislike the idea that a free preprint is available for papers they publish in their journals or conference proceedings, 69% of the publishers do in fact allow (some form of) self-archiving. For example, ACM, IEEE, Springer, and Elsevier (the publishers I work most with) explicitly permit it, albeit always under specific conditions. These conditions can usually be met, and include such requirements as providing a note that the paper has been accepted for publication, a pointer to the URL where the published article can be found, and a copyright notice indicating the publisher now owns the copyright.

Preprint links as shown on the ICSE site.

Preprint Linking

All preprint linking does is ask authors of accepted conference papers whether they happen to have a link to a preprint available. If so, a link to this preprint is included in the program as listed on the conference web site.

For ICSE, doing full preprint linking at the conference site was proposed and conducted by Dirk Beyer, after an earlier set of preprint links was collected on a separate github gist by Adrian Kuhn.

Dirk Beyer runs Conference Publishing Consulting, the organization hired by ICSE to collect all material to be published, and get it ready for inclusion in the ACM/IEEE Digital Libraries. As part of this collection process, ICSE asked the authors to provide a link to a preprint, which, if provided, was included in the ICSE on line program.

The ICSE 2013 proceedings were published by IEEE. In their recently updated policy, they indicate that “IEEE will make available to each author a preprint version of that person’s article that includes the Digital Object Identifier, IEEE’s copyright notice, and a notice showing the article has been accepted for publication.” Thus, for ICSE, authors were provided with a possibility to download this version, which they then could self-archive.

Preprints @ ICSE 2013

With a preprint mechanism setup at ICSE, the next question is how many researchers actually made use of it. Below are some statistics I collected from the ICSE conference site:

  Track / Conference   #Papers presented   #Preprints   Percentage
  Research Track              85               49           57%
  ICSE NIER                   31               19           61%
  ICSE SEIP                   19                6           31%
  ICSE Education              13                3           23%
  ICSE Tools                  16                7           43%
  MSR                         64               36           56%
  Total                      228              120           53%

In other words, a little over half of the authors (53%) provided a preprint link. And almost half of the authors decided not to.

I hope and expect that for upcoming ICSE conferences, more authors will submit their preprint links. As a comparison, at the recent FORTE conference, 75% of the authors submitted a preprint link.

For ICSE, this year was the first time preprint linking was available. Authors may not have been familiar with the phenomenon, may not have realized in advance how wonderful a program with links to freely available papers is, may have missed the deadline for submitting the link, or may have missed the email asking for a link altogether because it ended up in their spam folder. And, in all honesty, even I managed to miss the opportunity to send in my link in time for some of my MSR 2013 papers. But that won’t happen again.

Preprint Link Sustainability

An issue of some concern is the “sustainability” of the preprint links — what happens, for example, to homepages with preprints once the author changes jobs (once the PhD student finishes)?

The natural solution is to publish preprints not just on individual home pages, but to submit them to repositories that are likely to have a longer lifetime, such as arxiv, or your own technical report series.

An interesting route is taken by ICPC, which instead of preprint links simply provides a dedicated preprint search on Google Scholar, with authors and title already filled in. If a preprint has been published somewhere, and the author/title combination is sufficiently unique, this works remarkably well. MSR uses a mixture of both approaches, providing a search link for presentations for which no preprint link was provided.
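
Generating such a pre-filled Scholar query takes only a few lines. A minimal Python sketch (the helper function is hypothetical; only Scholar’s standard query parameter q is used):

    from urllib.parse import urlencode

    def scholar_search_url(title, authors):
        # Build a Google Scholar search pre-filled with title and authors,
        # in the hope that the first hit is a self-archived preprint.
        query = '"{}" {}'.format(title, " ".join(authors))
        return "https://scholar.google.com/scholar?" + urlencode({"q": query})

    print(scholar_search_url(
        "A practical model for measuring maintainability",
        ["Heitlager", "Kuipers", "Visser"]))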

Implications

Open access, and hence preprint publishing, is of utmost importance for software engineering.

Software engineering research is unique in that it has a potentially large target audience of developers and software engineering practitioners that is on line continually. Software engineering research cannot afford to dismiss this audience by hiding research results behind paywalls.

For this reason, it is inevitable that in the long run, software engineering researchers will transform their professional organizations (ACM and IEEE) so that their digital libraries will make all software engineering results available via open access.

Irrespective of this long term development, the software engineering research community must hold on to the new preprint linking approach to leverage green open access.

Thus:

  1. As an author, self-archive your paper as a preprint or technical report. Consider your paper unpublished if the preprint is not available.
  2. If you are a professor leading a research group, inspire your students and group members to make all of their publications available as preprints.
  3. If you are a reviewer for a conference, insist that your chairs ensure that preprint links are collected and made available on the conference web site.
  4. If you are a conference organizer or program chair, convince all authors to publish preprints, and make these links permanently available on the conference web site.
  5. If you are on a hiring committee for new university staff members, demand that candidates have their publications available as preprints.

Much of this has been possible for years. Maybe one of the reasons these practices have not been adopted in full so far, is that they cost some time and effort — from authors, professors, and conference organizers alike — time that cannot be used for creative work, and effort that does not immediately contribute to tenure or promotion. But it is time well spent, as it helps to disseminate our research to a wider audience.

Thanks to the ICSE move, there now may be a momentum to make a full-swing transition to green open access in the software engineering community. I look forward to 2014, when all software engineering conferences will have adopted preprint linking, and 100% of the authors will have submitted their preprint links. Let us not miss this unique opportunity.

Acknowledgments

I am grateful to Dirk Beyer, for setting up preprint linking at ICSE, and for providing feedback on this post.

Update (Summer 2013)

David Notkin on Why We Publish

This week David Notkin (1955-2013) passed away, after a long battle against cancer. He was one of my heroes. He did great research on discovering invariants, reflexion models, software architecture, clone analysis, and more. His 1986 Gandalf paper was one of the first I studied when starting as a PhD student in 1990.

In December 2011, David sent me an email in which he expressed interest in doing a sabbatical in our TU Delft Software Engineering Research Group in 2013-2014. I was as proud as one can be. Unfortunately, half a year later he had to cancel his plans due to his health.

David was loved by many, as he had a genuine interest in people: developers, software users, researchers, you. And he was a great (friendly and persistent) organizer — 3 weeks ago he still answered my email on ICSE 2013 organizational matters.

In February 2013, he wrote a beautiful editorial for the ACM Transactions on Software Engineering and Methodology, entitled Looking Back. His opening reads: “It is bittersweet to pen my final editorial”. Then David continues to address the question why it is that we publish:

“… I’d like very much for each and every reader, contributor, reviewer, and editor to remember that the publications aren’t primarily for promotions, or for citation counts, or such.

Rather, the intent is to make the engineering of software more effective so that society can benefit even more from the amazing potential of software.

It is sometimes hard to see this on a day-to-day basis given the many external pressures that we all face. But if we never see this, what we do has little value to society. If we care about influence, as I hope we do, then adding value to society is the real measure we should pursue.

Of course, this isn’t easy to quantify (as are many important things in life, such as romance), and it’s rarely something a single individual can achieve even in a lifetime. But BHAGs (Big Hairy Audacious Goals) are themselves of value, and we should never let them fade far from our minds and actions.”

Dear David, we will miss you very much.


See also: