Green Open Access FAQ

(Opinionated) answers to frequently asked questions on (green) open access, from a computer science (software engineering) research perspective.

Disclaimer: IANAL, so if you want to know things for sure you’ll have to study the references provided. Use at your own risk.

Green open access is trickier than I thought, so I might have made mistakes. Corrections are welcome, as are additional questions for this FAQ. Thanks!

Green Open Access Questions

  1. What is Green Open Access?
  2. What is a pre-print?
  3. What is a post-print?
  4. What is a publisher’s version?
  5. Do publishers allow Green Open Access?
  6. Under what conditions is Green Open Access permitted?
  7. What is Yellow Open Access?
  8. What is Gold Open Access?
  9. What is Hybrid Open Access?
  10. What are the Self-Archiving policies of common computer science venues?
  11. Is Green Open Access compulsory?
  12. Should I share my pre-print under a Creative Commons license?
  13. Can I use Green Open Access to comply with Plan S?
  14. What is a good place for self-archiving?
  15. Can I use PeerJ Preprints for Self-Archiving?
  16. Can I use ResearchGate or Academia.edu for Self-Archiving?
  17. Which version(s) should I self-archive?
  18. What does Gold Open Access add to Green Open Access?
  19. Will Green Open Access hurt commercial publishers?
  20. What is the greenest publisher in computer science?
  21. Should I use ACM Authorizer for Self-Archiving?
  22. As a conference organizer, can I mandate Green Open Access?
  23. What does Green Open Access cost?
  24. Should I adopt Green Open Access?
  25. Where can I learn more about Green Open Access?

What is Green Open Access?

In Green Open Access you as an author archive a version of your paper yourself, and make it publicly available. This can be at your personal home page, at the institutional repository of your employer (such as the one from TU Delft), or at an e-print server such as arXiv.

The word “archive” indicates that the paper is meant to remain permanently available.

What is a pre-print?

A pre-print is a version of a paper that is entirely prepared by the authors.

Since no publisher has been involved in any way in the preparation of such a pre-print, it feels right that the authors can deposit such pre-prints wherever they want. Before submission, the authors, or their employers such as universities, hold the copyright to the paper, and hence can publish the paper in online repositories.

Following the definition of SHERPA’s RoMEO project, pre-prints refer to the version before peer review organized by a publisher.

What is a post-print?

Following the RoMEO definitions, a post-print is a final draft as prepared by the authors themselves after reviewing. Thus, feedback from the reviewers has typically been included.

Here a publisher may have had some light involvement, for example by selecting the reviewers, making a reviewing system available, or by offering a formatting template / style sheet. The post-print, however, is author-prepared, so copy-editing and final markup by the publisher have not been done.

A (Plan S) synonym for postprint is “Author-Accepted Manuscript”, sometimes abbreviated as AAM.

What is a publisher’s version?

While pre- and post-prints are author-prepared, the final publisher’s version is created by the publisher.

The publisher’s involvement may vary from very little (camera-ready version entirely created by the authors) up to substantial (proofreading, new markup, copy-editing, etc.).

Publishers typically make their versions available after a transfer of copyright from the authors to the publisher. And with the copyright owned by the publisher, it is the publisher who determines not only where the publisher’s version can be made available, but also where the original author-prepared pre- or post-prints can appear.

A (Plan S) synonym is “Version of Record”, sometimes abbreviated as VoR.

Do publishers allow Green Open Access?

Self-archiving of unpublished material to which you own the copyright is always allowed.

Whether self-archiving of a paper that has been accepted by a publisher for publication is allowed depends on that publisher. You have transferred your copyright, so it is up to the publisher to decide who else can publish it as well.

Different publishers have different policies, and these policies may in turn differ per journal. Furthermore, the policies may vary over time.

The SHERPA project does a great job of keeping track of the open access status of many journals. You’ll need to check the status of your journal; if it is green, you can self-archive your paper (usually under certain publisher-specific conditions).

In the RoMEO definition, green open access means that authors can self-archive both pre-prints and post-prints.
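
If you need to do this check for many papers, you can also query SHERPA programmatically. Below is a minimal sketch in Python; be aware that the v2 endpoint, the parameter names, and the response layout are assumptions from memory, so verify them against the current SHERPA API documentation (you will also need to request an API key from SHERPA):

import json
import urllib.parse
import urllib.request

# Hypothetical lookup of a journal's self-archiving policy via SHERPA's v2 API.
# Endpoint, parameter names, and response fields are assumptions: check the docs.
API_KEY = "your-api-key"  # placeholder; SHERPA issues keys on request
query = urllib.parse.urlencode({
    "item-type": "publication",
    "format": "Json",
    "api-key": API_KEY,
    # Filter by ISSN; 0164-1212 is the Journal of Systems and Software.
    "filter": json.dumps([["issn", "equals", "0164-1212"]]),
})
url = "https://v2.sherpa.ac.uk/cgi/retrieve?" + query
with urllib.request.urlopen(url) as response:
    policy = json.load(response)

# Pretty-print the returned policy records for manual inspection.
print(json.dumps(policy, indent=2))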

Under what conditions is Green Open Access permitted?

Since the publisher holds copyright on your published paper, it can (and usually does) impose constraints on the self-archived versions. You should always check the specific constraints for your journal or publisher, for example via the RoMEO journal list.

The following conditions are fairly common:

  1. You can generally self-archive pre- and post-prints, but not the publisher’s version.

  2. In the meta-data of the self-archived version you need to add a reference to the final version (for example through its DOI).

  3. In the meta-data of the self-archived version you need to include a statement of the current ownership of the copyright, sometimes through specific sentences that must be copy-pasted.

  4. The repository in which you self-archive should be non-commercial. Thus, arXiv and institutional repositories are usually permitted, but commercial ones like PeerJ Preprints, Academia.edu or ResearchGate are not.

  5. Some commercial publishers impose an embargo on post-prints. For example, Elsevier permits sharing the post-print version in an institutional repository only after 12-24 months (depending on the journal).

Usually, meeting the demands of a single publisher is relatively easy. Given points 2 and 3, it typically involves creating a dedicated pdf with a footnote on the first page containing the required extra information.
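
As an illustration, such a footnote can be attached to the title in LaTeX via \thanks. The sketch below is purely hypothetical: the copyright line, journal name, and DOI are placeholders that you must replace with whatever your publisher actually prescribes.

% A hypothetical first-page note for a self-archived post-print.
% The copyright sentence, journal name, and DOI below are placeholders.
\documentclass{article}
\title{My Paper Title\thanks{\textcopyright~Publisher YYYY. This is the
  author's version of the work. The definitive version was published in
  Journal Name and is available via DOI 10.xxxx/xxxxx.}}
\author{A.~N.~Author}
\date{}
\begin{document}
\maketitle
\end{document}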

However, every publisher has its own rules. If you publish your papers in a range of different venues (which is what good researchers do), you’ll have to know many different rules if you want to do green open access in the correct way.

What is Yellow Open Access?

Some publishers (such as Wiley) allow self-archiving of pre-prints only, and not of post-prints. This is referred to as yellow open access in RoMEO. Yellow is more restrictive than green.

As an author, I find yellow open access frustrating: it forbids me from sharing, via open access, the version of my paper that was improved thanks to the reviewers.

As a reviewer, I feel yellow open access wastes my effort: I tried to help the authors by giving useful feedback, and the publisher forbids my improvements from being reflected in the open access version.

What is Gold Open Access?

Gold Open Access refers to journals (or conference proceedings) that are completely accessible to the public without requiring paid subscriptions.

Often, gold implies green, for example when a publisher such as PeerJ, PLOS ONE or LIPIcs adopts a Creative Commons license — which allows anyone, including the authors, to share a copy under the condition of proper attribution.

The funding model for gold open access is usually not based on subscriptions but on Article Processing Charges (APCs), i.e., a payment by the authors for each article they publish, varying from $70 (LIPIcs) to $1500 (PLOS ONE) per paper.

What is Hybrid Open Access?

Hybrid open access refers to a restricted (subscription-funded) journal that permits authors to pay extra to make their own paper available as open access.

This practice is also referred to as double dipping: The publisher collects revenue from both subscriptions and article processing charges.

University libraries and funding agencies do not like hybrid access, since they feel they have to pay twice, once for the authors and once for the readers.

Green open access is better than hybrid open access, simply because it achieves the same result (the article is available) at lower cost.

What are the Self-Archiving policies of common computer science venues?

For your and my convenience, here is the green status of some publishers that are common in software engineering (check the links for the most up-to-date information):

  • ACM: Green, e.g., TOSEM, see also the ACM author rights. For ACM conferences, the author-prepared camera-ready version often already includes a DOI, making it easy to adhere to ACM’s meta-data requirements. Note that some ACM conferences are gold open access, for example the ones published in the Proceedings of the ACM on Programming Languages.
  • IEEE: Green, e.g., TSE. The IEEE makes a version available that meets all IEEE meta-data requirements and that authors can use for self-archiving. See also their self-archiving FAQ.
  • Springer: Green, e.g., EMSE, SoSyM, LNCS. Pre-print on arXiv; post-print on your personal page immediately, and in a repository either immediately or after a 12-month embargo, depending on the journal.
  • Elsevier: Mostly green, e.g., JSS, IST. Pre-prints are allowed; post-prints with a CC BY-NC-ND license on your personal page immediately, and in an institutional repository after a 12-48 month embargo. To circumvent the embargo, you can publish the pre-print on arXiv, update it with the post-print (which is permitted), and change the license to CC BY-NC-ND as required by Elsevier, after which anyone (including you) can share the post-print on any non-commercial platform.
  • Wiley: Mostly yellow, i.e., only pre-prints can be shared immediately; post-prints (even on personal pages) only after a 12-month embargo. E.g., JSEP.

Luckily, there are also some golden open access publishers (which typically permit self-archiving as well, should you still want that), such as LIPIcs, PeerJ, PLOS ONE, and Usenix.

Is Green Open Access compulsory?

Funding agencies (NWO, EU, Bill and Melinda Gates Foundation, …) as well as universities (TU Delft, University of California, UCL, ETH Zurich, Imperial College, …) are increasingly demanding that all publications resulting from their projects or employees are available in open access.

My own university, TU Delft, insists, like many others, on green open access:

As of 1 May 2016 the so-called Green Road to Open Access publishing is mandatory for all (co)authors at TU Delft. The (co)author must publish the final accepted author’s version of a peer-reviewed article with the required metadata in the TU Delft Institutional Repository.

This makes sense: TU Delft wants to have copies of all the papers that its employees produce, and to make sure that the TU Delft stakeholders, i.e., the Dutch citizens, can access all results. Note that TU Delft insists on post-prints that include reviewer-induced modifications.

The Dutch national science foundation NWO has a preference for gold open access, but accepts green open access if that’s impossible (“Encourage Gold, require immediate Green”).

Should I share my pre-print under a Creative Commons license?

You should only do this if you are certain that the publisher’s conditions on self-archiving pre-prints are compatible with a Creative Commons license. If that is the case, you probably are dealing with a golden open access publisher anyway.

Creative Commons licenses are very liberal, allowing anyone to re-distribute (copy) the licensed work (under certain conditions, including proper attribution).

This effectively nullifies (some of) the rights that come with copyright. For that reason, publishers that insist on owning the full copyright to the papers they publish typically disallow self-archiving earlier versions with such a license.

For example, ACM Computing Surveys insists on a fixed statement indicating:

… © ACM, YYYY. This is the author’s version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution…

This “not for redistribution” is incompatible with Creative Commons, which is all about sharing.

Furthermore, a Creative Commons license is irrevocable. So once you have picked it for your pre-print, you have effectively made a choice for golden open access publishers only (some people might consider this desirable, but it seriously limits your options).

Therefore, my suggestion would be to keep the copyright yourself for as long as you can, giving you the freedom to switch to a Creative Commons license once you know who your publisher is.

Can I use Green Open Access to Comply with Plan S?

Yes, you can, but you are only compliant with Plan S if you share your postprint, with a Creative Commons License, immediately (no embargo).

Unfortunately, a Creative Commons license is likely incompatible with the constraints of the publisher of your eventual paper. As a way around this, in some (most) cases (e.g., ACM, IEEE journals, Springer) you are allowed to distribute your postprint under a CC BY license if you pay the hybrid open access fee. These fees are not reimbursable under Plan S, but this hybrid-and-then-self-archive route is compliant with Plan S.

What is a good place for self-archiving?

It depends on your needs.

Your employer may require that you use your institutional repository (such as the TU Delft Repository). This helps your employer to keep track of how many of its publications are available as open access. The higher this number, the stronger the position of your employer when negotiating open access deals with publishers. Institutional archiving still allows you to post a version elsewhere as well.

Subject repositories such as arXiv offer good visibility to your peers. In fields like physics using arXiv is very common, whereas in Computer Science this is less so. A good thing about arXiv is that it permits versioning, making it possible to submit a pre-print first, which can later be updated with the post-print. You can choose among several licenses. If you intend to publish your paper, however, you should adopt arXiv’s Non-Exclusive Distribution license (which just allows arXiv to distribute the paper) instead of the more generous Creative Commons licenses — which would likely conflict with the copyright claims of the publisher of the refereed paper.

Your personal home page is a good place if you want to offer an overview of your own research. Home page URLs may not be very permanent though, so on its own a home page is not suitable as an approach to self-archiving. You can use it in addition to archiving in repositories, but not as a replacement.

Can I use PeerJ Preprints for Self-Archiving?

Probably not — and it’s also not what PeerJ Preprints are intended for.

PeerJ Preprints is a commercial eprint server requiring a Creative Commons license. It is intended to share drafts that have not yet been peer reviewed for formal publication.

It offers good visibility (a preprint on goto statements attracted 15,000 views), and a smooth user interface for posting comments and receiving feedback. Articles cannot be removed once uploaded.

The PeerJ Preprints service is compatible with other golden open access publishers (such as PeerJ itself or Usenix).

The PeerJ Preprints service, however, is incompatible with most other publishers (such as ACM, IEEE, or Springer) because (1) the service is commercial; (2) the service requires a Creative Commons license; and (3) preprints once posted cannot be removed.

So, if you want to abide by the rules, uploading a pre-print to PeerJ Preprints severely limits your subsequent publication options.

Can I use ResearchGate or Academia.edu for Self-Archiving?

No — unless you only work with liberal publishers with permissive licenses such as Creative Commons.

ResearchGate and Academia.edu are researcher social networks that also offer self-archiving features. As they are commercial repositories, most publishers will not allow sharing your paper on these networks.

The ResearchGate copyright pages provide useful information on this.

The Academia.edu copyright pages state the following:

Many journals will also allow an author to retain rights to all pre-publication drafts of his or her published work, which permits the author to post a pre-publication version of the work on Academia.edu. According to Sherpa, which tracks journal publishers’ approach to copyright, 90% of journals allow uploading of either the pre-print or the post-print of your paper.

This seems misleading to me: Most publishers explicitly disallow posting pre-prints to commercial repositories such as Academia.edu.

In both cases, the safer route is to use permitted places such as your home page or institutional repository for self-archiving, and only share links to your papers with ResearchGate or Academia.edu.

Which version(s) should I self-archive?

It depends.

Publishing a pre-print as soon as it is ready has several advantages:

  • You can receive rapid feedback on a version that is available early.

  • You can extend your pre-print with an appendix, containing material (e.g., experimental data) that does not fit in a paper that you’d submit to a journal.

  • It allows you to claim ownership of certain ideas before your competition does.

  • You offer the most value to society, since you allow anyone to benefit from your hard work as early as possible.

Nevertheless, publishing a post-print only can also make sense:

  • You may want to keep some results or data secret from your competition until your paper is actually accepted for publication.

  • You may want to avoid confusion between different versions (pre-print versus post-print).

  • You may be wary of leaving a trail of rejected versions submitted to different venues.

  • You may want to submit your pre-print to a venue adopting double-blind reviewing, requiring you to remain anonymous as an author. Publishing your pre-print during the reviewing phase would make it easy for reviewers to find your paper and connect your name to it.

For these reasons, and primarily to avoid confusion, I typically share just the post-print: The camera-ready version that I create and submit to the publisher is also the version that I self-archive as post-print.

What does Gold Open Access add to Green Open Access?

For open access, gold is better than green since:

  • it shifts the burden of making articles publicly available from the researcher to the publisher.
  • it places a paper in a venue that is entirely open access. Thus, other papers improving upon or referring to your paper (published in the same journal) will be open access too.
  • gold typically implies green, i.e., the license of the journal is similar to Creative Commons, allowing anyone, including the authors, to share a copy under the condition of proper attribution.

Will Green Open Access hurt commercial publishers?

Maybe. But most academic publishers already allow green open access, and they are doing just fine. So I would not worry about it.

What is the greenest publisher in computer science?

The greenest publisher should be the one imposing the fewest restrictions on self-archiving.

From that perspective, publishers who want to be the greenest should in fact want to be gold, making their papers available under a permissive Creative Commons license. An example is Usenix.

Among the non-golden publishers, the greenest are probably the non-commercial ones, such as IEEE and ACM: They impose simple conditions that are usually easy to meet.

The ACM, “the world’s largest educational and scientific computing society”, claims to be among the “greenest” publishers. Based on their tolerant attitude towards self-archiving of post-prints this may be somewhat justified. Furthermore, their Authorizer mechanism permits setting up free access to the publisher’s version.

But greenest is gold. So I look forward to the day the ACM follows its little sister Usenix in a full embrace of golden open access.

Should I use ACM Authorizer for Self-Archiving?

The ACM offers the Authorizer mechanism to provide free access to the Publisher’s Version of a paper, which only works from one user-specified URL. For example, I can use it to create a dedicated link from my institutional paper page to the publisher’s version.

However, Authorizer links cannot be accessed from other pages, and there is no point in emailing or tweeting them. Since only one authorizer link can exist per paper, I cannot use an authorizer link for both my institutional repository, and for the repository of my funding agency.

These restrictions on Authorizer links make them unsuitable as a replacement for self-archiving (let alone as a replacement for golden open access).

As a conference organizer, can I mandate Green Open Access?

Green open access is self-archiving, giving the authors the permission to archive their own papers.

As a conference organizer working with a non-open-access publisher (ACM, IEEE, Springer-Verlag), you are not allowed to archive and distribute all the papers of the conference yourself.

OOPSLA program with DOI and preprint link

What several conferences do instead, though, is collect links to pre- or post-prints. For example, the online program of the OOPSLA 2016 conference has links to both the publisher’s version (through a DOI) and to an author-provided post-print.

For OOPSLA 2016, 20 out of 52 authors (38%) provided such a link to their paper, a number similar to that at other conferences adopting preprint linking.

As a conference organizer, you can do your best to encourage authors to submit their pre-print links. Or you can use your influence in the steering committee to push the conference to switch to an open access publisher, such as LIPIcs or Usenix.

As an author, you can help by actually offering a link to your pre-print.

What does Green Open Access cost?

For authors, green open access typically costs no money. University repositories, arXiv, and PeerJ Preprints are all free to use.

It does cost (a bit of) effort though:

  • You need to find out the specific conditions under which the publisher of your current paper permits self-archiving.
  • You need to actually upload your paper to some repository, provide the correct meta-data, and meet the publisher’s constraints.

The fact that open access is free for authors does not mean that there are no costs involved. For example, the money to keep arXiv up and running comes from a series of sponsors, including TU Delft.

Should I adopt Green Open Access?

Yes.

Better availability of your papers will help you in several ways:

  • Impact in Research: Other researchers can access your papers more easily, increasing the chances that they will build upon your results in their work;
  • Impact in Practice: Practitioners may be interested in using your results: A paywall is an extra and undesirable impediment to such adoption;
  • Improved Results: Increased usage of your results in either industry or academia will put your results to the real test, and will help you improve your results.

Besides that, (green) open access is a way of delivering to the taxpayers what they paid for: Your research results.

Where can I learn more about Green Open Access?

Useful resources include:


Version history:

  • 6 November 2016: Version 0.1, Initial version, call for feedback.
  • 14 November 2016: Version 0.2, update on commercial repositories.
  • 18 November 2016: Version 0.3, update on ACM Authorizer.
  • 20 November 2016: Version 0.4, added TOC, update on commercial repositories.
  • 06 December 2016: Version 0.5, updated information on ACM and IEEE.
  • 20 December 2016: Version 0.6, added info on Creative Commons and AI venues.
  • 27 July 2018: Version 0.7, update on where to archive. Released as CC BY-SA 4.0.
  • 18 November 2018: Version 0.8, updated info on Elsevier.
  • 10 September 2019: Version 0.9, added question on Plan S compliance.

Acknowledgments: I thank Moritz Beller (TU Delft) and Dirk Beyer (LMU Munich) for valuable feedback and corrections.

© Arie van Deursen, November 2016. Licensed under CC BY-SA 4.0.

PhD Student Vacancy in Test Amplification

Within the Software Engineering Research Group of Delft University of Technology, we are looking for an enthusiastic and strong PhD student in the area of “test amplification”.

The PhD project will be in the context of the new STAMP project funded by the H2020 programme of the European Union.

STAMP is a 3-year R&D project that leverages advanced research in automatic test generation to push automation in DevOps one step further through innovative methods of test amplification. It will reuse existing assets (test cases, API descriptions, dependency models) in order to generate more test cases and test configurations each time the application is updated. The project has an ambitious agenda towards industry transfer. To this end, STAMP brings together 3 research groups with strong expertise in software testing and continuous development, as well as 6 industry partners that develop innovative open source software products.

The STAMP project is led by Benoit Baudry from INRIA, France. The STAMP consortium consists of the following partners

The PhD student employed by Delft University of Technology will conduct research as part of the STAMP project together with the STAMP partners. Employment will be for a period of four years. The PhD student will enroll in the TU Delft Graduate School.

The primary line of research for the TU Delft PhD student will revolve around runtime test amplification. Online test amplification automatically extracts information from logs collected in production in order to generate new tests that can replicate failures, crashes, anomalies and outlier events. The research will be devoted to (i) defining monitoring techniques and log data analytics to collect run-time information; (ii) detecting interesting behaviors with respect to existing tests; (iii) creating new tests for testing the behaviors of interest, for example through state machine learning or genetic algorithms; (iv) adding new probes and new log messages into the production code to improve its testability.


Besides this primary line of research, the PhD student will be involved in lines of research led by the other STAMP partners, addressing unit test amplification and configurability test amplification. Furthermore, the PhD student will be involved in case studies and evaluations conducted in collaboration with the industrial partners in the consortium.

From the TU Delft Software Engineering group, several people will be involved, including Arie van Deursen (principal investigator), Andy Zaidman, and Mauricio Aniche. Furthermore, where possible, collaborations with existing projects will be set up, such as the 3TU Big Software on the Run and TestRoot projects.

Requirements for the PhD candidate include:

  • Being a team player;
  • Strong writing and presentation skills;
  • Being hungry for new knowledge in software engineering;
  • Ability to develop prototype research tools;
  • Interest in bringing program analysis, testing, and genetic algorithms together;
  • Eagerness to work with the STAMP partners on test amplification in their contexts;
  • Completed MSc degree in computer science.

For more information on this vacancy and the STAMP project, please contact Arie van Deursen.

To apply, please follow the instructions of the official opening at the TU Delft Vacancies pages. Your letter should include a clear motivation for why you want to work on the STAMP project, and an explanation of what you can bring to it. Also provide your CV, (pointers to) written material (e.g., a term paper, an MSc thesis, or published conference or journal papers), and, if possible, pointers to (open source) software projects you have contributed to.

The vacancy will be open until 2 February 2017, but applying early never hurts. We look forward to receiving your application!

Embedded Software Development with C Language Extensions

Arie van Deursen, with Markus Voelter, Bernd Kolb, and Stephan Eberle.

In embedded systems development, C remains the dominant programming language, because it permits writing low level algorithms and producing efficient binaries. Unfortunately, the price to pay for this is limited support for explicit and safe abstractions.

To overcome this, engineers at itemis and fortiss created mbeddr: an extensible version of C that comes with extensions relevant to embedded software development. Examples include explicit support for state machines, variability management, physical units, interfaces and components, or unit testing. The extensions are supported by an IDE created through JetBrains MPS. Furthermore, mbeddr users can introduce their own extensions.

To me, the ideas behind mbeddr are extremely appealing. But I also had concerns: Would this work in practice? Does this scale to real-world embedded systems? What are the benefits of such an approach? What are the problems?

Therefore, when Markus Voelter, lead architect of mbeddr invited me to join in a critical evaluation of a system created with mbeddr that they just shipped, I happily accepted. Eventually, this resulted in our paper Using C Language Extensions for Developing Embedded Software: A Case Study, which was accepted for publication and presentation at OOPSLA 2015.

The subject system built with mbeddr is an electricity smart meter, which continuously senses the instantaneous voltage and current on a mains line using analog front ends and analog-to-digital converters. Its mbeddr implementation consists of 80 interfaces and 167 components, corresponding to roughly 44,000 lines of C code.

Main layers, sub-systems, and components of the smart metering system.

Our goal in analyzing this system was to find out the degree to which C language extensions (as implemented in mbeddr) are useful for developing embedded software. We adopted the case study research method to investigate the use of mbeddr in an actual commercial project, since the true risks and benefits of language extensions can be observed only in such projects. Focussing on a single case allows us to provide significant details about that case.

To achieve this goal, we investigated the following aspects of the smart metering system:

  1. Complexity: Are the abstractions provided by mbeddr beneficial for mastering the complexity encountered in a real-world embedded system? Which additional abstractions would be needed or useful?
  2. Testing: Can the mbeddr extensions help with testing the system? In particular, is hardware-independent testing possible to support automated, continuous integration and build? Is incremental integration and commissioning supported?
  3. Overhead: Is the low-level C code generated from the mbeddr extensions efficient enough for it to be deployable onto a real-world embedded device?
  4. Effort: How much effort is required for developing embedded software with mbeddr?

The detailed analysis and answers are in the paper. Our main findings are the following:

  • The extensions help in mastering complexity and lead to software that is more testable, easier to integrate and commission, and more evolvable.
  • Despite the abstractions introduced by mbeddr, the additional overhead is very low and acceptable in practice.
  • The development effort is reduced, particularly regarding evolution and commissioning.

In our paper, we also devote four pages to potential threats to the validity of our findings. Most importantly, in our experience with this case study and other projects, introducing mbeddr into an organization may be difficult, despite these benefits, due to a lack of developer skills and the need to adapt the development process.

The key insight for me is that mbeddr can help bring down one of the biggest cost and risk factors in embedded systems development, which is the integration and commissioning on the target hardware. Typically, this phase accounts for 40-50% of the project cost; for the smart meter system this was 13%. This was achieved by extensive unit and integration testing, using interfaces that could be instantiated both in a test as well as a target hardware environment.

Continuous integration is not just about the use of a continuous integration server. It is primarily about carefully modularizing the system into components that can be tested independently in different environments. Unfortunately, modularization is hard, especially in languages without explicit modularization primitives. Our study shows how extending C with language constructs can help to devise a modular, testable architecture, substantially reducing integration and commissioning costs.

For more information, see:

  • Markus Völter, Arie van Deursen, Bernd Kolb, Stephan Eberle. Using C Language Extensions for Developing Embedded Software: A Case Study. OOPSLA/SPLASH 2015 (pdf).
  • Presentation at OOPSLA 2015 by Markus Voelter (youtube, slides)
  • Information on this paper at the OOPSLA program pages.

Delft Technology Fellowship for Top Female (Computer) Scientists

Delft University of Technology is aiming to substantially increase the number of top female faculty members. To help accelerate this, the Delft Technology Fellowship offers high-profile, tenure-track positions to top female scientists in research fields in which Delft University of Technology (TU Delft) is active.

One of those fields is of course Computer Science — so if you’re a female computer scientist (or software engineering researcher!) interested in working as an assistant, associate or even full professor (depending on your experience) at the departments of Computer Science and Engineering of the TU Delft Faculty of Electrical Engineering, Mathematics, and Computer Science (EEMCS), please consider applying.

Previous rounds of the TU Delft Fellowship program were held in 2012 and 2014. In both years, 9 top scientists were hired, in fields as diverse as interactive media design, protein machines, solid-state physics, climate change, and more.

Since applicants can come from any field of research, the competition for the TU Delft fellowship program is fierce. The program is highly international, with just four out of the current 18 fellows from The Netherlands. As a fellow, you should be the best in your field, and you should be able to explain to non-computer scientists what makes you so good.

As a Delft Technology Fellow, you can propose your own research program. As in previous years, it can be in any research field in which TU Delft is active, such as computer science.

The computer science and engineering research at TU Delft is organized into 12 so-called sections, covering such topics as algorithmics, embedded software, cyber security, pattern recognition, and my own topic, software engineering. Each section consists of around four faculty members and 10-15 PhD students, and is typically headed by one full professor. PhD students are usually externally funded, through government subsidies obtained in competition, or via collaborations with industry.

As a fellow at the EEMCS faculty, you are expected to bring your own topic. You would, however, typically be working within one of the existing sections. Thus, if you apply, it makes sense to identify the section that is most related to your area of work, and explore whether you see collaboration opportunities. To that end, you can contact any of the section leaders, or me if you want to discuss where your topic would fit best. Naturally, if you are in software engineering, also feel free to contact me or any current SERG group member.

For formal instructions on how to apply, please consult the Fellowship web site. The application procedure is open from 12 October 2015 until 8 January 2016.

PhD/PostDoc Vacancies in Persistent Code Reviews

In the fall of 2015 we are starting a brand new project titled Persistent Code Reviewing, funded by NWO. If you’re into code reviews, software quality, or software testing, please consider applying for a position as a PhD student or postdoc within this project.

To quote the abstract of the project proposal:

Code review is the manual assessment of source code by human reviewers. It is mainly intended to identify defects and quality problems in code changes before deployment in production. Code review is widely recommended: Several studies have shown that it supports software quality and reliability crucially. Properly doing code reviews requires expensive developer time and zeal, for each and every reviewed change.

The goal of the “Persistent Code Reviews” project is to make the efforts and knowledge that reviewers put into a code review available outside the code change context to which they are directed.

Naturally, given my long term interest in software testing, we will include any test activities (test design and execution, test adequacy considerations) that affect the reviewing process in our analysis.

The project is funded by the Top Programme of NWO, the Netherlands Organization for Scientific Research.

Within the project, we have openings for two PhD students and one postdoctoral researcher. The research will be conducted at the Software Engineering Research Group (SERG) of Delft University of Technology in The Netherlands. At SERG, you will be working in a team of around 25 researchers, including 6 full-time faculty members.

In this project you will be supervised by Alberto Bacchelli and myself. To learn more about any of these positions, please contact one of us.

Requirements for all positions include:

  • Being a team player;
  • Strong writing and presentation skills;
  • Being hungry for new knowledge in software engineering;
  • Ability to develop prototype research tools;
  • Interest in bringing program analysis, testing, and human aspects of software engineering together.

To apply, please send us an application letter, a CV, and (pointers to) written material (e.g., a term paper or an MSc thesis for applicants for the PhD positions, and published papers or the PhD thesis for the postdoc).

We are in the process of further distributing this announcement: Final decisions on the appointments will be made at the end of October.

We look forward to receiving your application as soon as possible.

In Vivo Software Analytics: PhD/Postdoc positions

Last week, we had the kickoff of a new project we are participating in, addressing “In Vivo Software Analytics”. In this project, called “Big Software on the Run” (BSR), we monitor the quality of software in its “natural habitat”, i.e., as it is running in the wild. The project is a collaboration between the three technical universities (3TU) of The Netherlands (Eindhoven, Twente, Delft).

To quote the 3TU.BSR plan:

Millions of lines of code – written in different languages by different people at different times, and operating on a variety of platforms – drive the systems performing key processes in our society. The resulting software needs to evolve and can no longer be controlled a priori as is illustrated by a range of software problems. The 3TU.BSR research program will develop novel techniques and tools to analyze software systems in vivo – making it possible to visualize behavior, create models, check conformance, predict problems, and recommend corrective actions.

Essentially, we propose to address big software by applying big data techniques to system health information obtained at run time. It provides feedback from operations to developers, in order to make systems more resilient against the risks that come with rapid change.

The project brings together some of the best software engineering and data science groups and researchers of the three technical universities in The Netherlands.

The project is sponsored by NIRICT, the 3TU center for Netherlands Research in Information and Communication Technology.

The project duration is four years. At each of the three technical universities, two PhD students and one postdoc will be employed. To maximize collaboration, each PhD student has two supervisors, from two different universities. Furthermore, the full research team, including all supervisors, PhD students, and postdocs, will visit each other regularly.

Within the Delft Software Engineering Research Group, we are searching for one PhD student and one postdoc to strengthen the 3TU.BSR project team.

The PhD student we are looking for will work on the intersection between visualization and dynamic program analysis. In particular, we are searching for a PhD student to work on log event analysis, and on visualization of anomalies and exceptions as they occur in traces of running systems. The PhD student will be jointly supervised by Jack van Wijk and myself.

The postdoctoral researcher we are looking for should be able to establish connections between the various research themes and groups working on the project (such as visualization, process mining, repository mining, privacy-preserving log file analysis, and model checking). Thus, we are looking for a researcher who has successfully completed his or her PhD thesis, and is open to working with several of the six PhD students within the project. The postdoc will be based in the Software Engineering Research Group.

Requirements for both positions include:

  • Being a team player;
  • Strong writing and presentation skills;
  • Being hungry for new knowledge in software engineering;
  • Ability to develop prototype research tools;
  • Interest in bringing visualization, run time analysis, and human aspects of software engineering together.

To apply, please send me an application letter, a CV, and (pointers to) written material (e.g., a term paper or an MSc thesis for applicants for the PhD position, and published papers or the PhD thesis for the postdoc).

We are in the process of further distributing this announcement: Final decisions on the appointments will be made at the end of October.

I look forward to receiving your application!

Think Twice Before Using the “Maintainability Index”

Code metrics results in VS2010

This is a quick note about the “Maintainability Index”, a metric aimed at assessing software maintainability, as I recently ran into developers and researchers who are (still) using it.

The Maintainability Index was introduced at the International Conference on Software Maintenance in 1992. To date, it is included in Visual Studio (since 2007), in the recent (2012) JSComplexity and Radon metrics reporters for Javascript and Python, and in older metric tool suites such as verifysoft.

At first sight, this sounds like a great success of knowledge transfer from academic research to industry practice. Upon closer inspection, the Maintainability Index turns out to be problematic.

The Original Index

The Maintainability Index was introduced in 1992 by Paul Oman and Jack Hagemeister, originally presented at the International Conference on Software Maintenance (ICSM 1992) and later refined in a paper that appeared in IEEE Computer. It is a blend of several metrics, including Halstead’s Volume (HV), McCabe’s cyclomatic complexity (CC), lines of code (LOC), and percentage of comments (COM). For each of these metrics, the average per module is taken, and the averages are combined into a single formula:

Maintainability Index =
  171 - 5.2 * ln(aveHV)
      - 0.23 * aveCC
      - 16.2 * ln(aveLOC)
      + 50 * sin(sqrt(2.4 * perCOM))

To arrive at this formula, Oman and Hagemeister started with a number of systems from Hewlett-Packard (written in C and Pascal in the late 80s, “ranging in size from 1000 to 10,000 lines of code”). For each system, engineers provided a rating (between 1 and 100) of its maintainability. Subsequently, 40 different metrics were calculated for these systems. Finally, statistical regression analysis was applied to find the best way to combine (a selection of) these metrics to fit the experts’ opinion. This eventually resulted in the given formula. The higher its value, the more maintainable a system is deemed to be.

The maintainability index attracted quite some attention, also because the Software Engineering Institute (SEI) promoted it, for example in their 1997 C4 Software Technology Reference Guide. This report describes the Maintainability Index as “good and sufficient predictors of maintainability”, and “potentially very useful for operational Department of Defense systems”. Furthermore, they suggest that “it is advisable to test the coefficients for proper fit with each major system to which the MI is applied.”

Use in Visual Studio

Visual Studio Code Metrics were announced in February 2007. A November 2007 blogpost clarifies the specifics of the maintainability index included in it. The formula Visual Studio uses is slightly different, based on the 1994 version:

Maintainability Index =
  MAX(0, (171 - 5.2 * ln(Halstead Volume)
             - 0.23 * Cyclomatic Complexity
             - 16.2 * ln(Lines of Code)
         ) * 100 / 171)

As you can see, the constants are literally the same as in the original formula. The new definition merely rescales the index to a number between 0 and 100. Also, the comment metric has been removed.

Furthermore, Visual Studio provides an interpretation:

MI >= 20: High Maintainability
10 <= MI < 20: Moderate Maintainability
MI < 10: Low Maintainability

I have not been able to find a justification for these thresholds. The 1994 IEEE Computer paper used 85 and 65 (instead of 20 and 10) as thresholds, describing them as a good “rule of thumb”.
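
To make the two variants and the thresholds concrete, here is a minimal sketch in Python; the function names and the example numbers are mine, not taken from the original papers or from Visual Studio.

import math

def mi_1994(ave_hv, ave_cc, ave_loc, per_com):
    """Original four-metric index, computed over per-module averages."""
    return (171 - 5.2 * math.log(ave_hv)
                - 0.23 * ave_cc
                - 16.2 * math.log(ave_loc)
                + 50 * math.sin(math.sqrt(2.4 * per_com)))

def mi_visual_studio(hv, cc, loc):
    """Visual Studio variant: comment term dropped, rescaled to 0..100."""
    raw = 171 - 5.2 * math.log(hv) - 0.23 * cc - 16.2 * math.log(loc)
    return max(0.0, raw * 100 / 171)

def vs_band(mi):
    """Visual Studio's thresholds for the rescaled index."""
    if mi >= 20:
        return "high"
    return "moderate" if mi >= 10 else "low"

# Example: a module with Halstead volume 1000, complexity 10, and 200 LOC.
mi = mi_visual_studio(1000, 10, 200)
print(round(mi, 1), vs_band(mi))  # 27.5 high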

The metrics are available within Visual Studio, and are part of the code metrics power tools, which can also be used in a continuous integration server.

Concerns

I encountered the Maintainability Index myself in 2003, when working on Software Risk Assessments in collaboration with SIG. Later, researchers from SIG published a thorough analysis of the Maintainability Index (first when introducing their practical model for assessing maintainability and later as section 6.1 of their paper on technical quality and issue resolution).

Based on this, my key concerns about the Maintainability Index are:

  1. There is no clear explanation for the specific derived formula.
  2. The only explanation that can be given is that all underlying metrics (Halstead, Cyclomatic Complexity, Lines of Code) are directly correlated with size (lines of code). But then just measuring lines of code and taking the average per module is a much simpler metric.
  3. The Maintainability Index is based on the average per file of, e.g., cyclomatic complexity. However, as emphasized by Heitlager et al., these metrics follow a power law, and taking the average tends to mask the presence of high-risk parts (see the small demonstration after this list).
  4. The set of programs used to derive the metric and evaluate it was small, and contained small programs only.
  5. Furthermore, the programs were written in C and Pascal, which may have rather different maintainability characteristics than current object-oriented languages such as C#, Java, or Javascript.
  6. For the experiments conducted, only a few programs were analyzed, and no statistical significance was reported. Thus, the results might as well be due to chance.
  7. Tool smiths and vendors used the exact same formula and coefficients as the 1994 experiments, without any recalibration.
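
To illustrate concern 3: two code bases can have identical average complexity while one of them hides an extremely complex, high-risk file. A tiny demonstration, with made-up numbers:

from statistics import mean

# Two systems with the same average cyclomatic complexity per file:
uniform = [5] * 10          # ten moderately complex files
skewed = [1] * 9 + [41]     # power-law-like tail: one very complex file
print(mean(uniform), mean(skewed))  # both averages are 5
# The averaged index cannot tell the risky system from the benign one.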

One could argue that any of these concerns is reason enough not to use the Maintainability Index.

These concerns are consistent with a recent (2012) empirical study, in which one application was independently built by four different companies. The researchers used these systems to compare maintainability and several metrics, including the Maintainability Index. Their findings include that size as a measure of maintainability has been underrated, and that the “sophisticated” maintenance metrics are overrated.

Think Twice

In summary, if you are a researcher, think twice before using the maintainability index in your experiments. Make sure you study and fully understand the original papers published about it.

If you are a tool smith or tool vendor, there is not much point in having several metrics that are all confounded by size. Check correlations between the metrics you offer, and if any of them are strongly correlated pick the one with the clearest and simplest explanation.
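
Checking this is straightforward; here is a sketch with synthetic per-module numbers of my own (a real analysis would of course use your own measurements; statistics.correlation requires Python 3.10+):

from statistics import correlation

# Synthetic per-module measurements; entries at the same index
# describe the same module.
loc = [120, 450, 80, 1500, 300, 950]       # lines of code
cc = [10, 38, 6, 130, 25, 80]              # cyclomatic complexity
hv = [900, 3800, 500, 14000, 2300, 7600]   # Halstead volume

# Pairwise Pearson correlations; values close to 1 suggest the
# metrics mostly restate module size.
print(correlation(loc, cc), correlation(loc, hv), correlation(cc, hv))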

Last but not least, if you are a developer wondering whether to use the Maintainability Index: Most likely, you’ll be better off looking at lines of code, as it gives easier-to-understand information on maintainability than a formula computed over averaged metrics confounded by size.

Further Reading

  1. Paul Oman and Jack Hagemeister. “Metrics for assessing a software system’s maintainability”. Proceedings International Conference on Software Maintenance (ICSM), 1992, pp. 337-344. (doi)
  2. Paul W. Oman, Jack R. Hagemeister: Construction and testing of polynomials predicting software maintainability. Journal of Systems and Software 24(3), 1994, pp. 251-266. (doi).
  3. Don M. Coleman, Dan Ash, Bruce Lowther, Paul W. Oman. Using Metrics to Evaluate Software System Maintainability. IEEE Computer 27(8), 1994, pp. 44-49. (doi, postprint)
  4. Kurt Welker. The Software Maintainability Index Revisited. CrossTalk, August 2001, pp 18-21. (pdf)
  5. Maintainability Index Range and Meaning. Code Analysis Team Blog, blogs.msdn, 20 November 2007.
  6. Ilja Heitlager, Tobias Kuipers, Joost Visser. A practical model for measuring maintainability. Proceedings 6th International Conference on the Quality of Information and Communications Technology, 2007. QUATIC 2007. (scholar)
  7. Dennis Bijlsma, Miguel Alexandre Ferreira, Bart Luijten, and Joost Visser. Faster Issue Resolution with Higher Technical Quality of Software. Software Quality Journal 20(2): 265-285 (2012). (doi, preprint). Page 14 addresses the Maintainability Index.
  8. Khaled El Emam, Saida Benlarbi, Nishith Goel, and Shesh N. Rai. The Confounding Effect of Class Size on the Validity of Object-Oriented Metrics. IEEE Transactions on Software Engineering, 27(7):630-650, 2001. (doi, preprint)
  9. Dag Sjøberg, Bente Anda, and Audris Mockus. Questioning software maintenance metrics: a comparative case study. Proceedings of the ACM-IEEE international symposium on Empirical software engineering and measurement (ESEM), 2012, pp. 107-110. (doi, postprint).
Edit September 2014

Included discussion on Sjøberg’s paper, the thresholds in Visual Studio, and the problems following from averaging in a power law.


© Arie van Deursen, August 2014.

Dimensions of Innovation

As an academic in software engineering, I want to make the engineering of software more effective so that society can benefit even more from the amazing potential of software.

This requires not just good research, but also success in innovation: ensuring that research ideas are adopted in society. Why is innovation so hard? How can we increase innovation in software engineering and computer science alike? What can universities do to increase their innovation success rate?

I was prompted to rethink these questions when Informatics Europe contacted me to co-organize this year’s European Computer Science Summit (ECSS 2013), with the special theme Informatics and Innovation.

Informatics Europe is an organization of Computer Science Departments in Europe, founded in 2005. Its mission is to foster the development of quality research and teaching in information and computer sciences, also known as Informatics. In its yearly summit, deans and heads of department get together, to share experience in leading academic groups, and to join forces when undertaking new activities at a European level.

When Informatics Europe asked me to be program chair of their two-day summit, my only answer could be “yes”. I have a long-standing interest in innovation, and here I had the opportunity and freedom to compile a full-featured program on innovation as I saw fit, with various keynotes, panels, and presentations by participants — a wonderful assignment.

In compiling the program I took the words of Peter Denning as starting point: “an idea that changes no one’s behavior is only an invention, not an innovation.” In other words, innovation is about changing people, which is much harder than coming up with a clever idea.

In the end, I came up with the following “Dimensions of Innovation” that guided the composition of the program.

  1. Startups

    Innovation needs optimists who believe they can change the world. One of the best ways to bring a crazy new idea to sustainable success is by starting a new company dedicated to creating and conquering markets that had not existed before.

    Many of the activities at ECSS 2013 relate to startups. The opening keynote is by François Bancilhon, a serial entrepreneur currently working in the field of open data. Furthermore, we have Heleen Kist, a strategy consultant in the area of access to finance. Last but not least, the first day includes a panel specifically devoted to entrepreneurship, and one of the pre-summit workshops is entirely devoted to entrepreneurship for faculty.

  2. Patents

    Patents are traditionally used to protect (possibly large) investments that may be required for successful innovation. In computer science, patents are more than problematic, as evidenced by patent trolls, fights between giants such as Oracle and Google, and the differences in regulations in the US and in Europe. Yet at the same time (software) patents can be crucial, for example to attract investors for a startup.

    Several of the ECSS keynote speakers have concrete experience with patents — Pamela Zave at AT&T, and Erik Meijer from his time at Microsoft, when he co-authored hundreds of patents. Furthermore, Yannis Skulikaris of the European Patent Office will survey patenting of software-related inventions.

  3. Open Source, Open Data

    An often overlooked dimension of innovation is open source and open data. How much money can be made by giving away software, or by allowing others to freely use your data? Yet many enterprises are immensely successful based on open source and open data.

    At ECSS, keynote speaker Erik Meijer is actively working on a series of open source projects (related to his work on reactive programming). In the area of open data, we have entrepreneur François Bancilhon, and semantic web specialist Frank van Harmelen, who is head of the Network Institute of the Vrije Universiteit in Amsterdam.

  4. Teaching for Innovation

    How can universities use education to strengthen innovation? What should students learn so that they can become ambassadors of change? How innovative should students be so that they can become successful in society? At the same time, how can outreach and education be broadened so that new groups of students are reached, for example via online learning?

    To address these questions, at ECSS we have Anka Mulder, member of the executive board of Delft University of Technology, and former president of the OpenCourseWare Consortium. She is responsible for the TU Delft strategy on Massive Open Online Courses (MOOCs), and she will share TU Delft's experiences in setting up its MOOCs.

    Furthermore, ECSS will host a panel discussion, in which experienced yet non-conformist teachers and managers will share their experience in innovative teaching to foster innovation.

  5. Fostering Innovation

    Policy makers and university management are often at a loss as to how to encourage their stubborn academics to contribute to innovation, the “third pillar” of academia.

    Therefore, ECSS is a place for university managers to meet, as evidenced by the pre-summit Workshop for Deans, Department Chairs and Research Directors. Furthermore, we have executive board member Anka Mulder as a speaker.

    Last but not least, we have Carl-Cristian Buhr, member of the cabinet of Digital Agenda Commissioner and EU Commission Vice-President Neelie Kroes, who will speak about the EU Horizon 2020 programme and its relevance to computer science research, education, and innovation.

  6. Inspirational Content

    All talk about innovation is void without inspirational content. Therefore, throughout the conference, exciting research insights and new course examples will be interwoven in the presentations.

    For example, we have Pamela Zave speaking on The Power of Abstraction, Frank van Harmelen addressing progress in his semantic web work at the Network Institute, and Felienne Hermans on how to reach thousands of people through social media. Last but not least we have Erik Meijer, who is never scared to throw both math and source code into his presentations.

The summit will take place October 7–9, 2013, in Amsterdam. You are all welcome to join!


Update:

  • The slides I used for opening ECSS 2013

Some Research Paper Writing Recommendations

Last week, I received an email from Alex Orso and Sebastian Uchitel, who had been asked to give a talk on “How to get my papers accepted at top SE conferences” at the Latin American School on Software Engineering. Here’s their question:

We hope you can spare a few minutes to share with us the key recommendations you would give to PhD students that have not yet had successful submissions to top software engineering conferences, such as ICSE.

An interesting request, and I certainly looked forward to the advice my fellow researchers would provide; you can see their advice in a presentation by Alex Orso.

When working with my students on papers, I must admit I sometimes repeat myself. Below are some of the things I hear myself say most often.

Explain the Innovation

The first thing to keep in mind is that a research paper should explain the innovation. This makes it quite different from a text book chapter, or from a hands-on tutorial. The purpose is not to explain a technique so that others can use it. Instead, the purpose of a research paper is to explain what is new about the proposed technique.

Identify the Contributions

Explaining novelty is driven by contributions. A contribution is anything the world did not know before this paper, but which we now do know thanks to this paper.

I tend to insist on an explicit list of contributions, which I usually put at the end of the paper.

“The contributions of this paper are …”

Each contribution is an outcome, not the process of doing something. Contributions are things, not effort. Thus, “we spent 6 months manually analyzing 500,000 commit messages” is not a contribution. This effort, though, hopefully has resulted in a useful contribution, which may be that “for projects claiming to do test-driven development, to our surprise we found that 75% of the code commits are not accompanied by a commit in the test code.”

Usually, when thinking about the structure of a paper, quite a bit of actual research has been done already. It is then important to reassess everything that has been done, in order to see what the real contributions of the research are. Contributions can include a new experimental design, a novel technique, a shared data set or open source tool, as well as new empirical evidence contradicting, confirming, or enriching existing theories.

Structure the Paper

With the contributions laid out, the structure of the paper appears naturally: Each contribution corresponds to a section.

This does not hold for the introductory and concluding sections, but it does hold for each of the core sections.

Furthermore, it is essential to separate background material from your own contributions. Clearly, most papers will rely on existing theories or techniques. These must be explained. But since the goal of the paper is to explain the innovation, all material that is not new should be clearly isolated. In this way, it is easiest for the reader (and the reviewer) to see what is new about this paper, and what is not.

As an example, take a typical structure of a research paper:

  1. Introduction
  2. Background: Cool existing work that you build upon.
  3. Problem statement: The deficiency you spotted
  4. Conceptual solution: A new way to deal with that problem!
  5. Open source implementation: Available for everyone!
  6. Experimental design for evaluation: Trickier than we thought!
  7. Evaluation results: It wasn’t easy to demonstrate, but yes, we have good evidence that this may work!
  8. Discussion: What can we do with these results? Promising ideas for future research or applications? And: a critical analysis of the threats to the validity of our results.
  9. Related work
  10. Concluding remarks.

In such a setup, sections 4-7 can each correspond to a contribution (and sometimes to more than one). The discussion section (8) is much more speculative, and usually does not contribute solid new knowledge.

Communicate the Contributions

Contributions do not just help in structuring a paper.

They are also the key aspect program committees look at when deciding about acceptance of a paper.

When reviewers evaluate a paper, they try to identify and interpret the contributions. Are these contributions really new? Are they important? Are they trivial? Did the authors provide sufficient evaluations for their claims? The paper should help the reviewer by being very explicit about the contributions and the claims to fame of these contributions.

When program committee members discuss a paper, they do so in terms of contributions. Thus, contributions should not just be strong, they should also be communicable.

For smaller conferences, it is safe to assume that all reviewers are experts. For large conferences, such as ICSE, the program committee is broad. Some of the reviewers will be genuine experts on the topic of the paper, and these reviewers should be truly excited about the results. Other reviewers, however, will be experts in completely different fields, and may have little understanding of the paper’s topic. When submitting to prestigious yet broad conferences, it is essential to make sure that any reviewer can understand and appreciate the contributions.

The ultimate non-expert is the program chair. The chair has to make a decision on every paper. If the program chair cannot understand a paper’s contributions, it is highly unlikely that the paper will get accepted.

Share Contributions Early

Getting a research paper, including its contributions, right is hard, especially since contributions have to be understandable to non-experts.

Therefore, it is crucial to offer help to others, volunteering to read preliminary drafts of papers and assessing the strength of their contributions. In return, you’ll have other people, possibly non-experts, assess the drafts you are producing, in this way helping each other to publish at these prestigious conferences.

But wait. Isn’t helping others a bad idea for highly competitive conferences? Doesn’t it reduce one’s own chances?

No. Software engineering conferences, including ICSE and FSE, accept any paper that is good. Such conferences do not work with acceptance rates that are fixed in advance. Thus, helping each other may increase the acceptance rate, but will not negatively affect any author.

Does This Help?

I hope some of these guidelines will be useful to “PhD students that have not yet had successful submissions to top software engineering conferences, such as ICSE.”

A lot more advice on how to write a research paper is available on the Internet. I do not have a list of resources at hand at the time of writing, but perhaps in the near future I will extend this post with additional pointers.

Luckily, this post is not a research paper. None of the ideas presented here is new. But they have worked for me, and I hope they’ll work for you too.


Image credits: Pencils in the Air, by Peter Logan. Photo by Mira66, Flickr.

Green Open Access and Preprint Linking

ICSE 2013

One of the most useful changes to the ICSE International Conference on Software Engineering this year was that the program website contained links to preprints of many of the papers presented.

As ICSE is a large event (over 1600 people attended in 2013), it is worth taking a look at what happened. What is preprint linking? How many authors actually provided a preprint link? What about other conferences? What are the wider implications for open access publishing in software engineering?

Self-Archiving

Preprint linking is based on the idea that authors, who do all the work in both writing and formatting the paper, have the right to self-archive the paper they created themselves (also called green open access). Authors can do this on their personal home page, in the institutional repositories of, e.g., the universities where they work, or in public preprint repositories such as arXiv.

Sharing preprints has been around in science for decades (if not ages): as an example, my ‘alma mater’ CWI was founded in 1947, and has a technical report series dating back to that year. These technical reports were exchanged (without cost) with other mathematical research institutes: first by plain old mail, then by email, later via ftp, and now through http.

While commercial publishers may dislike the idea that a free preprint is available for papers they publish in their journals or conference proceedings, 69% of the publishers do in fact allow (some form of) self-archiving. For example, ACM, IEEE, Springer, and Elsevier (the publishers I work most with) explicitly permit it, albeit always under specific conditions. These conditions can usually be met, and include such requirements as providing a note that the paper has been accepted for publication, a pointer to the URL where the published article can be found, and a copyright notice indicating the publisher now owns the copyright.
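What such a notice amounts to in practice is easy to show in code. Below is a minimal sketch that assembles one; the exact wording each publisher requires differs, so the template, function name, and metadata used here are illustrative assumptions, not any publisher’s official text.

```python
# Minimal sketch: assembling the notice that typically must accompany a
# self-archived paper. The wording is publisher-specific; this template
# and the example metadata are purely illustrative.

def self_archiving_notice(venue: str, year: int, publisher: str, doi: str) -> str:
    """Cover the three common requirements: an acceptance note, a pointer
    to the published version, and a copyright line."""
    return (
        f"This is the author's version of a paper accepted for publication "
        f"at {venue} {year}. "
        f"The final published version is available at https://doi.org/{doi}. "
        f"(c) {year} {publisher}."
    )

if __name__ == "__main__":
    # "10.0000/example" is a placeholder DOI, not a real identifier.
    print(self_archiving_notice("ICSE", 2013, "IEEE", "10.0000/example"))
```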

Preprint links as shown on the ICSE site.

Preprint Linking

All preprint linking does is ask authors of accepted conference papers whether they happen to have a link to a preprint available. If so, the conference web site will include a link to this preprint in the program as listed on its web site.

For ICSE, doing full preprint linking at the conference site was proposed and conducted by Dirk Beyer, after an earlier set of preprint links had been collected in a separate GitHub gist by Adrian Kuhn.

Dirk Beyer runs Conference Publishing Consulting, the organization hired by ICSE to collect all material to be published, and get it ready for inclusion in the ACM/IEEE Digital Libraries. As part of this collection process, ICSE asked the authors to provide a link to a preprint, which, if provided, was included in the ICSE on line program.

The ICSE 2013 proceedings were published by IEEE. In their recently updated policy, they indicate that “IEEE will make available to each author a preprint version of that person’s article that includes the Digital Object Identifier, IEEE’s copyright notice, and a notice showing the article has been accepted for publication.” Thus, for ICSE, authors were provided with a possibility to download this version, which they then could self-archive.
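Mechanically, preprint linking requires little more than merging the author-supplied links into the program pages. A minimal sketch of that step, under the assumption of a CSV file whose name and column names are hypothetical:

```python
# Minimal sketch: merge author-supplied preprint links into HTML program
# entries. The CSV file name and its column names are assumptions.
import csv
import html

with open("accepted_papers.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):  # assumed columns: authors, title, preprint_url
        authors = html.escape(row["authors"])
        title = html.escape(row["title"])
        link = (row.get("preprint_url") or "").strip()
        if link:
            # Authors who supplied a link get a clickable preprint entry.
            print(f'<li>{authors}. <a href="{html.escape(link)}">{title}</a> [preprint]</li>')
        else:
            # Everyone else is listed without a link.
            print(f"<li>{authors}. {title}</li>")
```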

Preprints @ ICSE 2013

With a preprint mechanism setup at ICSE, the next question is how many researchers actually made use of it. Below are some statistics I collected from the ICSE conference site:

| Track / Conference | #Papers presented | #Preprints | Percentage |
|--------------------|-------------------|------------|------------|
| Research Track     | 85                | 49         | 57%        |
| ICSE NIER          | 31                | 19         | 61%        |
| ICSE SEIP          | 19                | 6          | 31%        |
| ICSE Education     | 13                | 3          | 23%        |
| ICSE Tools         | 16                | 7          | 43%        |
| MSR                | 64                | 36         | 56%        |
| Total              | 228               | 120        | 53%        |


In other words, a little over half of the authors (53%) provided a preprint link. And almost half decided not to.

I hope and expect that for upcoming ICSE conferences, more authors will submit their preprint links. As a comparison, at the recent FORTE conference, 75% of the authors submitted a preprint link.

For ICSE, this year was the first time preprint linking was available. Authors may not have been familiar with the phenomenon, may not have realized in advance how wonderful a program with links to freely available papers is, may have missed the deadline for submitting the link, or may have missed the email asking for a link altogether as it ended up in their spam folder. And, in all honesty, even I managed to miss the opportunity to send in my link in time for some of my MSR 2013 papers. But that won’t happen again.

Preprint Link Sustainability

An issue of some concern is the “sustainability” of the preprint links — what happens, for example, to home pages with preprints once the author changes jobs (or once the PhD student graduates)?

The natural solution is to publish preprints not just on individual home pages, but to submit them to repositories that are likely to have a longer lifetime, such as arXiv, or your own technical report series.
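Sustainability can also be monitored. The sketch below checks whether a list of archived preprint links still resolves; the URLs in it are placeholders, and since some servers reject HEAD requests, a real checker might need a GET fallback.

```python
# Small sketch: check whether self-archived preprint links still resolve.
# The URLs below are placeholders, not real preprint locations.
import urllib.request

PREPRINT_LINKS = [
    "https://arxiv.org/abs/0000.00000",        # placeholder identifier
    "https://example.org/~author/paper.pdf",   # placeholder home page link
]

def link_is_alive(url: str, timeout: float = 10.0) -> bool:
    """Return True if a HEAD request to the URL succeeds."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status < 400
    except OSError:  # covers URLError, HTTPError, and timeouts
        return False

for url in PREPRINT_LINKS:
    status = "ok" if link_is_alive(url) else "BROKEN"
    print(f"{status:>6}  {url}")
```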

An interesting route is taken by ICPC, which instead of preprint links simply provides a dedicated preprint search on Google Scholar, with authors and title already filled in. If a preprint has been published somewhere, and the author/title combination is sufficiently unique, this works remarkably well. MSR uses a mixture of both approaches, providing a search link for presentations for which no preprint link was provided.
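Such a pre-filled search is straightforward to construct: it relies only on Google Scholar’s standard q query parameter. A minimal sketch, using a made-up paper:

```python
# Sketch of the ICPC-style fallback: instead of storing a preprint link,
# build a Google Scholar search URL pre-filled with title and authors.
from urllib.parse import urlencode

def scholar_search_url(title: str, authors: list[str]) -> str:
    """Quote the title for an exact-phrase match and append the authors."""
    query = f'"{title}" ' + " ".join(authors)
    return "https://scholar.google.com/scholar?" + urlencode({"q": query})

# Example with a made-up paper and authors.
print(scholar_search_url("A Made-Up Paper on Preprint Linking",
                         ["A. Author", "B. Author"]))
```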

Implications

Open access, and hence preprint publishing, is of utmost importance for software engineering.

Software engineering research is unique in that it has a potentially large target audience of developers and software engineering practitioners that is on line continually. Software engineering research cannot afford to dismiss this audience by hiding research results behind paywalls.

For this reason, it is inevitable that in the long run, software engineering researchers will transform their professional organizations (ACM and IEEE) so that their digital libraries will make all software engineering results available via open access.

Irrespective of this long term development, the software engineering research community must hold on to the new preprint linking approach to leverage green open access.

Thus:

  1. As an author, self-archive your paper as a preprint or technical report. Consider your paper unpublished if the preprint is not available.
  2. If you are a professor leading a research group, inspire your students and group members to make all of their publications available as preprints.
  3. If you are a reviewer for a conference, insist that your chairs ensure that preprint links are collected and made available on the conference web site.
  4. If you are a conference organizer or program chair, convince all authors to publish preprints, and make these links permanently available on the conference web site.
  5. If you are on a hiring committee for new university staff members, demand that candidates have their publications available as preprints.

Much of this has been possible for years. Maybe one of the reasons these practices have not been adopted in full so far, is that they cost some time and effort — from authors, professors, and conference organizers alike — time that cannot be used for creative work, and effort that does not immediately contribute to tenure or promotion. But it is time well spent, as it helps to disseminate our research to a wider audience.

Thanks to the ICSE move, there now may be a momentum to make a full swing transition to green open access in the software engineering community. I look forward to 2014, when all software engineering conferences will have adopted preprint linking, and 100% of the authors will have submitted their preprint links. Let us not miss this unique opportunity.

Acknowledgments

I am grateful to Dirk Beyer, for setting up preprint linking at ICSE, and for providing feedback on this post.

Update (Summer 2013)