PhD Student Vacancy in Test Amplification

Within the Software Engineering Research Group of Delft University of Technology, we are looking for an enthusiastic and strong PhD student in the area of “test amplification”.

The PhD project will be in the context of the new STAMP project funded by the H2020 programme of the European Union.

STAMP is a 3-year R&D project, which leverages advanced research in automatic test generation to push automation in DevOps one step further through innovative methods of test amplification. It will reuse existing assets (test cases, API descriptions, dependency models) in order to generate more test cases and test configurations each time the application is updated. The project has an ambitious agenda towards industry transfer. To that end, the STAMP project gathers three research groups with strong expertise in software testing and continuous development, as well as six industry partners that develop innovative open source software products.

The STAMP project is led by Benoit Baudry from INRIA, France, and brings together the three research groups and six industrial partners mentioned above.

The PhD student employed by Delft University of Technology will conduct research as part of the STAMP project together with the STAMP partners. Employment will be for a period of four years. The PhD student will enroll in the TU Delft Graduate School.

The primary line of research for the TU Delft PhD student will revolve around runtime test amplification. Online test amplification automatically extracts information from logs collected in production in order to generate new tests that can replicate failures, crashes, anomalies and outlier events. The research will be devoted to (i) defining monitoring techniques and log data analytics to collect run-time information; (ii) detecting interesting behaviors with respect to existing tests; (iii) creating new tests for testing the behaviors of interest, for example through state machine learning or genetic algorithms; (iv) adding new probes and new log messages into the production code to improve its testability.
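
As a concrete (and deliberately simplified) illustration, here is a sketch in Python of what the front end of such a pipeline could look like: scanning production logs for outlier events and emitting a skeleton test that tries to replicate them. The log format and the replay_event helper are invented for this example.

    import re

    # Hypothetical log format: "WARN checkout latency=2300ms"
    LOG_LINE = re.compile(r"(?P<level>\w+) (?P<event>\w+) latency=(?P<ms>\d+)ms")

    def outlier_events(log_lines, threshold_ms=1000):
        """Yield events whose latency exceeds a threshold -- a stand-in
        for more sophisticated anomaly detection on production logs."""
        for line in log_lines:
            m = LOG_LINE.match(line)
            if m and int(m.group("ms")) > threshold_ms:
                yield m.group("event"), int(m.group("ms"))

    def test_skeleton(event, ms):
        """Emit the source of a unit test that tries to replicate the outlier."""
        return (f"def test_replicates_{event}_outlier():\n"
                f"    # observed in production with latency {ms}ms\n"
                f"    result = replay_event({event!r})  # replay_event: hypothetical harness\n"
                f"    assert result.ok\n")

    with open("production.log") as log:
        for event, ms in outlier_events(log):
            print(test_skeleton(event, ms))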

Besides this primary line of research, the PhD student will be involved in lines of research led by the other STAMP partners, addressing unit test amplification and configurability test amplification. Furthermore, the PhD student will be involved in case studies and evaluations conducted in collaboration with the industrial partners in the consortium.

From the TU Delft Software Engineering group, several people will be involved, including Arie van Deursen (principal investigator), Andy Zaidman, and Mauricio Aniche. Furthermore, where possible, collaborations will be set up with existing projects, such as the 3TU Big Software on the Run and TestRoot projects.

Requirements for the PhD candidate include:

  • Being a team player;
  • Strong writing and presentation skills;
  • Being hungry for new knowledge in software engineering;
  • Ability to develop prototype research tools;
  • Interest in bringing program analysis, testing, and genetic algorithms together;
  • Eagerness to work with the STAMP partners on test amplification in their contexts;
  • A completed MSc degree in computer science.

For more information on this vacancy and the STAMP project, please contact Arie van Deursen.

To apply, please follow the instructions of the official opening at the TU Delft Vacancies pages. Your letter should include a clear motivation of why you want to work on the STAMP project, and an explanation of what you can bring to it. Also provide your CV, (pointers to) written material (e.g. a term paper, an MSc thesis, or published conference or journal papers), and if possible pointers to (open source) software projects you have contributed to.

The vacancy will be open until 2 February 2017, but applying early never hurts. We look forward to receiving your application!

Embedded Software Development with C Language Extensions

Arie van Deursen, with Markus Voelter, Bernd Kolb, and Stephan Eberle.

In embedded systems development, C remains the dominant programming language, because it permits writing low level algorithms and producing efficient binaries. Unfortunately, the price to pay for this is limited support for explicit and safe abstractions.

To overcome this, engineers at itemis and fortiss created mbeddr: an extensible version of C that comes with extensions relevant to embedded software development. Examples include explicit support for state machines, variability management, physical units, interfaces and components, or unit testing. The extensions are supported by an IDE created through JetBrains MPS. Furthermore, mbeddr users can introduce their own extensions.

To me, the ideas underlying mbeddr are extremely appealing. But I also had concerns: Would this work in practice? Does this scale to real-world embedded systems? What are the benefits of such an approach? What are the problems?

Therefore, when Markus Voelter, lead architect of mbeddr, invited me to join in a critical evaluation of a system created with mbeddr that they had just shipped, I happily accepted. Eventually, this resulted in our paper Using C Language Extensions for Developing Embedded Software: A Case Study, which was accepted for publication and presentation at OOPSLA 2015.

The subject system built with mbeddr is an electricity smart meter, which continuously senses the instantaneous voltage and current on a mains line using analog front ends and analog-to-digital converters. Its mbeddr implementation consists of 80 interfaces and 167 components, corresponding to roughly 44,000 lines of C code.

Main layers, sub-systems, and components of the smart metering system.

Our goal in analyzing this system was to find out the degree to which C language extensions (as implemented in mbeddr) are useful for developing embedded software. We adopted the case study research method to investigate the use of mbeddr in an actual commercial project, since the true risks and benefits of language extensions can be observed only in such projects. Focusing on a single case allows us to provide significant detail about that case.

To achieve this goal, we investigated the following aspects of the smart metering system:

  1. Complexity: Are the abstractions provided by mbeddr beneficial for mastering the complexity encountered in a real-world embedded system? Which additional abstractions would be needed or useful?
  2. Testing: Can the mbeddr extensions help with testing the system? In particular, is hardware-independent testing possible to support automated, continuous integration and build? Is incremental integration and commissioning supported?
  3. Overhead: Is the low-level C code generated from the mbeddr extensions efficient enough for it to be deployable onto a real-world embedded device?
  4. Effort: How much effort is required for developing embedded software with mbeddr?

The detailed analysis and answers are in the paper. Our main findings are the following:

  • The extensions help master complexity and lead to software that is more testable, easier to integrate and commission, and more evolvable.
  • Despite the abstractions introduced by mbeddr, the additional overhead is very low and acceptable in practice.
  • The development effort is reduced, particularly regarding evolution and commissioning.

In our paper, we also devote four pages to potential threats to the validity of our findings. Most importantly, in our experience with this case study and other projects, introducing mbeddr into an organization may be difficult, despite these benefits, due to a lack of developer skills and the need to adapt the development process.

The key insight for me is that mbeddr can help bring down one of the biggest cost and risk factors in embedded systems development: integration and commissioning on the target hardware. Typically, this phase accounts for 40-50% of the project cost; for the smart meter system it was 13%. This was achieved by extensive unit and integration testing, using interfaces that could be instantiated both in a test environment and on the target hardware.

Continuous integration is not just about the use of a continuous integration server. It is primarily about carefully modularizing the system into components that can be tested independently in different environments. Unfortunately, modularization is hard, especially in languages without explicit modularization primitives. Our study shows how extending C with language constructs can help to devise a modular, testable architecture, substantially reducing integration and commissioning costs.
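
The pattern behind this is easy to illustrate. Below is a minimal sketch in plain Python (not mbeddr or C; all names are invented) of a component that depends on an interface which is instantiated with a stub in the test environment and with a hardware back end on the target.

    class AnalogFrontEnd:
        """Interface the meter depends on; implementations differ per environment."""
        def read_voltage(self) -> float:
            raise NotImplementedError

    class HardwareFrontEnd(AnalogFrontEnd):
        def read_voltage(self) -> float:
            raise RuntimeError("would read from the ADC on the target hardware")

    class StubFrontEnd(AnalogFrontEnd):
        """Test double enabling hardware-independent, CI-friendly testing."""
        def __init__(self, samples):
            self._samples = iter(samples)
        def read_voltage(self) -> float:
            return next(self._samples)

    class Meter:
        def __init__(self, frontend: AnalogFrontEnd):
            self._frontend = frontend
        def over_voltage(self, limit: float) -> bool:
            return self._frontend.read_voltage() > limit

    # In a unit test, no hardware is needed:
    assert Meter(StubFrontEnd([245.0])).over_voltage(limit=240.0)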

For more information, see:

  • Markus Völter, Arie van Deursen, Bernd Kolb, Stephan Eberle. Using C Language Extensions for Developing Embedded Software: A Case Study. OOPSLA/SPLASH 2015 (pdf).
  • Presentation at OOPSLA 2015 by Markus Voelter (youtube, slides)
  • Information on this paper at the OOPSLA program pages.

Delft Technology Fellowship for Top Female (Computer) Scientists

Delft University of Technology is aiming to substantially increase the number of top female faculty members. To help accelerate this, the Delft Technology Fellowship offers high-profile, tenure-track positions to top female scientists in research fields in which Delft University of Technology (TU Delft) is active.

One of those fields is of course Computer Science — so if you’re a female computer scientist (or software engineering researcher!) interested in working as an assistant, associate or even full professor (depending on your experience) at the departments of Computer Science and Engineering of the TU Delft Faculty of Electrical Engineering, Mathematics, and Computer Science (EEMCS), please consider applying.

Previous rounds of the TU Delft Fellowship program were held in 2012 and 2014. In both years, 9 top scientists were hired, in fields as diverse as interactive media design, protein machines, solid state physics, and climate change.

Since applicants can come from any field of research, the competition for the TU Delft fellowship program is fierce. The program is highly international, with just four of the current 18 fellows coming from The Netherlands. As a fellow, you should be the best in your field, and you should be able to explain to non-computer scientists what makes you so good.

As a Delft Technology Fellow, you can propose your own research program. As in previous years, it can be in any research field in which TU Delft is active, such as computer science.

The computer science and engineering research at TU Delft is organized into 12 so-called sections, covering such topics as algorithmics, embedded software, cyber security, pattern recognition, and my own topic, software engineering. Each section consists of around four faculty members and 10-15 PhD students, and is typically headed by one full professor. PhD students are usually externally funded, through government subsidies obtained in competition, or via collaborations with industry.

As a fellow at the EEMCS faculty, you are expected to bring your own topic. You would, however, typically be working within one of the existing sections. Thus, if you apply, it makes sense to identify the section most related to your area of work, and explore whether you see collaboration opportunities. To that end, you can contact any of the section leaders, or me if you want to discuss where your topic would fit best. Naturally, if you are in software engineering, also feel free to contact me, or any current SERG group member.

For formal instructions on how to apply, please consult the Fellowship web site. The application procedure is open from 12 October 2015 until 8 January 2016.

PhD/PostDoc Vacancies in Persistent Code Reviews

In the fall of 2015 we are starting a brand new project titled Persistent Code Reviewing, funded by NWO. If you’re into code reviews, software quality, or software testing, please consider applying for a position as PhD student or postdoc within this project.

To quote the abstract of the project proposal:

Code review is the manual assessment of source code by human reviewers. It is mainly intended to identify defects and quality problems in code changes before deployment in production. Code review is widely recommended: Several studies have shown that it supports software quality and reliability crucially. Properly doing code reviews requires expensive developer time and zeal, for each and every reviewed change.

The goal of the “Persistent Code Reviewing” project is to make the efforts and knowledge that reviewers put into a code review available outside the context of the specific code change to which they are directed.

Naturally, given my long-term interest in software testing, our analysis will include any test activities (test design and execution, test adequacy considerations) that affect the reviewing process.

The project is funded by the Top Programme of NWO, the Netherlands Organization for Scientific Research.

Within the project, we have openings for two PhD students and one postdoctoral researcher. The research will be conducted at the Software Engineering Research Group (SERG) of Delft University of Technology in The Netherlands. At SERG, you will be working in a team of around 25 researchers, including 6 full time faculty members.

In this project you will be supervised by Alberto Bacchelli and myself. To learn more about any of these positions, please contact one of us.

Requirements for all positions include:

  • Being a team player;
  • Strong writing and presentation skills;
  • Being hungry for new knowledge in software engineering;
  • Ability to develop prototype research tools;
  • Interest in bringing program analysis, testing, and human aspects of software engineering together.

To apply, please send us an application letter, a CV, and (pointers to) written material (e.g. a term paper or an MSc thesis for applicants for the PhD positions, and published papers or the PhD thesis for the postdoc).

We are still in the process of distributing this announcement; final decisions on the appointments will be made at the end of October.

We look forward to receiving your application as soon as possible.

In Vivo Software Analytics: PhD/Postdoc positions

Last week, we had the kickoff of a new project we are participating in, addressing “In Vivo Software Analytics”. In this project, called “Big Software on the Run” (BSR), we monitor the quality of software in its “natural habitat”, i.e., as it is running in the wild. The project is a collaboration between the three technical universities (3TU) of The Netherlands (Eindhoven, Twente, Delft).

To quote the 3TU.BSR plan:

Millions of lines of code – written in different languages by different people at different times, and operating on a variety of platforms – drive the systems performing key processes in our society. The resulting software needs to evolve and can no longer be controlled a priori as is illustrated by a range of software problems. The 3TU.BSR research program will develop novel techniques and tools to analyze software systems in vivo – making it possible to visualize behavior, create models, check conformance, predict problems, and recommend corrective actions.

Essentially, we propose to address big software by applying big data techniques to system health information obtained at run time. This provides feedback from operations to developers, making systems more resilient against the risks that come with rapid change.

The project brings together some of the best software engineering and data science groups and researchers of the three technical universities in The Netherlands.

The project is sponsored by NIRICT, the 3TU center for Netherlands Research in Information and Communication Technology.

The project duration is four years. At each of the three technical universities, two PhD students and one postdoc will be employed. To maximize collaboration, each PhD student has two supervisors, from two different universities. Furthermore, the full research team, including all supervisors, PhD students, and postdocs, will regularly visit each other.

Within the Delft Software Engineering Research Group, we are searching for one PhD student and one postdoc to strengthen the 3TU.BSR project team.

The PhD student we are looking for will work at the intersection of visualization and dynamic program analysis. In particular, we are searching for a PhD student to work on log event analysis, and on visualization of anomalies and exceptions occurring in traces of running systems. The PhD student will be jointly supervised by Jack van Wijk and myself.

The postdoctoral researcher we are looking for should be able to establish connections between the various research themes and groups working on the project (such as visualization, process mining, repository mining, privacy-preserving log file analysis, and model checking). Thus, we are looking for a researcher who has successfully completed his or her PhD thesis, and is open to working with several of the six PhD students within the project. The postdoc will be based in the Software Engineering Research Group.

Requirements for both positions include:

  • Being a team player;
  • Strong writing and presentation skills;
  • Being hungry for new knowledge in software engineering;
  • Ability to develop prototype research tools;
  • Interest in bringing visualization, run time analysis, and human aspects of software engineering together.

To apply, please send me an application letter, a CV, and (pointers to) written material (e.g. a term paper or an MSc thesis for applicants for the PhD position, and published papers or the PhD thesis for the postdoc).

We are still in the process of distributing this announcement; final decisions on the appointments will be made at the end of October.

I look forward to receiving your application!

Think Twice Before Using the “Maintainability Index”

Code metrics results in VS2010

This is a quick note about the “Maintainability Index”, a metric aimed at assessing software maintainability, as I recently ran into developers and researchers who are (still) using it.

The Maintainability Index was introduced at the International Conference on Software Maintenance in 1992. To date, it is included in Visual Studio (since 2007), in the more recent (2012) JSComplexity and Radon metrics reporters for JavaScript and Python, and in older metric tool suites such as verifysoft.

At first sight, this sounds like a great success of knowledge transfer from academic research to industry practice. Upon closer inspection, the Maintainability Index turns out to be problematic.

The Original Index

The Maintainability Index was introduced in 1992 by Paul Oman and Jack Hagemeister, originally presented at the International Conference on Software Maintenance (ICSM 1992) and later refined in a paper that appeared in IEEE Computer. It is a blend of several metrics, including Halstead’s Volume (HV), McCabe’s cyclomatic complexity (CC), lines of code (LOC), and percentage of comments (COM). For each of these metrics, the average per module is taken, and the averages are combined into a single formula:

Maintainability Index =
  171 - 5.2 * ln(aveV)
      - 0.23 * aveG
      - 16.2 * ln(aveLOC)
      + 50 * sin(sqrt(2.4 * perCM))

where aveV is the average Halstead Volume, aveG the average cyclomatic complexity, aveLOC the average lines of code (all per module), and perCM the average percentage of comment lines.

To arrive at this formula, Oman and Hagemeister started with a number of systems from Hewlett-Packard (written in C and Pascal in the late 80s, “ranging in size from 1000 to 10,000 lines of code”). For each system, engineers provided a rating (between 1 and 100) of its maintainability. Subsequently, 40 different metrics were calculated for these systems. Finally, statistical regression analysis was applied to find the best way to combine (a selection of) these metrics to fit the experts’ opinions. This eventually resulted in the given formula. The higher its value, the more maintainable a system is deemed to be.
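
To illustrate the kind of fit involved (with synthetic data, since the original HP measurements and ratings were never published), deriving such coefficients boils down to an ordinary least-squares regression of the expert ratings on the candidate metrics:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8  # number of systems rated by the engineers

    # Synthetic per-system averages standing in for the real HP measurements.
    X = np.column_stack([
        np.ones(n),                          # intercept (cf. the 171 term)
        np.log(rng.uniform(100, 5000, n)),   # ln(average Halstead Volume)
        rng.uniform(1, 20, n),               # average cyclomatic complexity
        np.log(rng.uniform(50, 500, n)),     # ln(average lines of code)
    ])
    ratings = rng.uniform(1, 100, n)         # expert maintainability ratings

    coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    print(coef)  # the analogue of 171, -5.2, -0.23, -16.2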

The maintainability index attracted quite some attention, also because the Software Engineering Institute (SEI) promoted it, for example in their 1997 C4 Software Technology Reference Guide. This report describes the Maintainability Index as “good and sufficient predictors of maintainability”, and “potentially very useful for operational Department of Defense systems”. Furthermore, they suggest that “it is advisable to test the coefficients for proper fit with each major system to which the MI is applied.”

Use in Visual Studio

Visual Studio Code Metrics were announced in February 2007. A November 2007 blogpost clarifies the specifics of the maintainability index included in it. The formula Visual Studio uses is slightly different, based on the 1994 version:

Maintainability Index =
  MAX(0, (171 - 5.2 * ln(Halstead Volume)
             - 0.23 * Cyclomatic Complexity
             - 16.2 * ln(Lines of Code)
         ) * 100 / 171)

As you can see, the constants are literally the same as in the original formula. The new definition merely rescales the index to a number between 0 and 100. Also, the comment metric has been removed.
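
In code, the Visual Studio variant is a direct transcription of the formula above (my own sketch, not Microsoft’s implementation):

    import math

    def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
        """Visual Studio variant of the Maintainability Index, on a 0-100 scale."""
        mi = (171
              - 5.2 * math.log(halstead_volume)   # natural log, as in the original
              - 0.23 * cyclomatic_complexity
              - 16.2 * math.log(loc)) * 100 / 171
        return max(0.0, mi)

    print(maintainability_index(1000, 10, 200))  # ~27: "high" per the thresholds below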

Furthermore, Visual Studio provides an interpretation:

MI >= 20: High Maintainability
10 <= MI < 20: Moderate Maintainability
MI < 10: Low Maintainability

I have not been able to find a justification for these thresholds. The 1994 IEEE Computer paper used 85 and 65 (instead of 20 and 10) as thresholds, describing them as a good “rule of thumb”.

The metrics are available within Visual Studio, and are part of the code metrics power tools, which can also be used in a continuous integration server.

Concerns

I encountered the Maintainability Index myself in 2003, when working on Software Risk Assessments in collaboration with SIG. Later, researchers from SIG published a thorough analysis of the Maintainability Index (first when introducing their practical model for assessing maintainability and later as section 6.1 of their paper on technical quality and issue resolution).

Based on this, my key concerns about the Maintainability Index are:

  1. There is no clear explanation for the specific derived formula.
  2. The only explanation that can be given is that all underlying metrics (Halstead, Cyclomatic Complexity, Lines of Code) are directly correlated with size (lines of code). But then just measuring lines of code and averaging per module would be a much simpler metric.
  3. The Maintainability Index is based on the average per file of, e.g., cyclomatic complexity. However, as emphasized by Heitlager et al., these metrics follow a power law, and taking the average tends to mask the presence of high-risk parts (see the sketch after this list).
  4. The set of programs used to derive the metric and evaluate it was small, and contained small programs only.
  5. Furthermore, the programs were written in C and Pascal, which may have rather different maintainability characteristics than current object-oriented languages such as C#, Java, or Javascript.
  6. For the experiments conducted, only a few programs were analyzed, and no statistical significance was reported. Thus, the results might as well be due to chance.
  7. Tool smiths and vendors used the exact same formula and coefficients as the 1994 experiments, without any recalibration.
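
A sketch of concern 3: two systems can have identical average complexity while one hides an extreme, high-risk module in the tail. The numbers are invented, but the effect is exactly what averaging over a power-law distribution produces.

    # Two systems with the same average cyclomatic complexity per module.
    system_a = [2] * 100          # uniformly simple modules
    system_b = [1] * 99 + [101]   # one pathological module in the tail

    mean = lambda xs: sum(xs) / len(xs)
    assert mean(system_a) == mean(system_b) == 2.0   # the averages agree...
    assert max(system_b) == 101                      # ...but the risk does not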

One could argue that any of these concerns is reason enough not to use the Maintainability Index.

These concerns are consistent with a recent (2012) empirical study, in which one application was independently built by four different companies. The researchers used these systems to compare maintainability and several metrics, including the Maintainability Index. Their findings include that size as a measure of maintainability has been underrated, and that the “sophisticated” maintenance metrics are overrated.

Think Twice

In summary, if you are a researcher, think twice before using the maintainability index in your experiments. Make sure you study and fully understand the original papers published about it.

If you are a tool smith or tool vendor, there is not much point in having several metrics that are all confounded by size. Check correlations between the metrics you offer, and if any of them are strongly correlated, pick the one with the clearest and simplest explanation.

Last but not least, if you are a developer wondering whether to use the Maintainability Index: most likely, you’ll be better off looking at lines of code, as it gives easier-to-understand information on maintainability than a formula computed over averaged metrics confounded by size.

Further Reading

  1. Paul Oman and Jack Hagemeister. “Metrics for assessing a software system’s maintainability”. Proceedings International Conference on Software Maintenance (ICSM), 1992, pp. 337-344. (doi)
  2. Paul W. Oman, Jack R. Hagemeister: Construction and testing of polynomials predicting software maintainability. Journal of Systems and Software 24(3), 1994, pp. 251-266. (doi).
  3. Don M. Coleman, Dan Ash, Bruce Lowther, Paul W. Oman. Using Metrics to Evaluate Software System Maintainability. IEEE Computer 27(8), 1994, pp. 44-49. (doi, postprint)
  4. Kurt Welker. The Software Maintainability Index Revisited. CrossTalk, August 2001, pp 18-21. (pdf)
  5. Maintainability Index Range and Meaning. Code Analysis Team Blog, blogs.msdn, 20 November 2007.
  6. Ilja Heitlager, Tobias Kuipers, Joost Visser. A practical model for measuring maintainability. Proceedings 6th International Conference on the Quality of Information and Communications Technology, 2007. QUATIC 2007. (scholar)
  7. Dennis Bijlsma, Miguel Alexandre Ferreira, Bart Luijten, and Joost Visser. Faster Issue Resolution with Higher Technical Quality of Software. Software Quality Journal 20(2): 265-285 (2012). (doi, preprint). Page 14 addresses the Maintainability Index.
  8. Khaled El Emam, Saida Benlarbi, Nishith Goel, and Shesh N. Rai. The Confounding Effect of Class Size on the Validity of Object-Oriented Metrics. IEEE Transactions on Software Engineering, 27(7):630-650, 2001. (doi, preprint)
  9. Dag Sjøberg, Bente Anda, and Audris Mockus. Questioning software maintenance metrics: a comparative case study. Proceedings of the ACM-IEEE international symposium on Empirical software engineering and measurement (ESEM), 2012, pp. 107-110. (doi, postprint).

Edit September 2014

Included discussion on Sjøberg’s paper, the thresholds in Visual Studio, and the problems following from averaging in a power law.


© Arie van Deursen, August 2014.

Dimensions of Innovation

As an academic in software engineering, I want to make the engineering of software more effective so that society can benefit even more from the amazing potential of software.

This requires not just good research, but also success in innovation: ensuring that research ideas are adopted in society. Why is innovation so hard? How can we increase innovation in software engineering and computer science alike? What can universities do to increase their innovation success rate?

I was prompted to rethink these questions when Informatics Europe contacted me to co-organize this year’s European Computer Science Summit (ECSS 2013), with the special theme Informatics and Innovation.

Informatics Europe is an organization of Computer Science Departments in Europe, founded in 2005. Its mission is to foster the development of quality research and teaching in information and computer sciences, also known as Informatics. In its yearly summit, deans and heads of department get together, to share experience in leading academic groups, and to join forces when undertaking new activities at a European level.

When Informatics Europe asked me to act as program chair of their two-day summit, my only possible answer was “yes”. I have a long-term interest in innovation, and here I had the opportunity and freedom to compile a full-featured program on innovation as I saw fit, with various keynotes, panels, and presentations by participants — a wonderful assignment.

In compiling the program I took the words of Peter Denning as starting point: “an idea that changes no one’s behavior is only an invention, not an innovation.” In other words, innovation is about changing people, which is much harder than coming up with a clever idea.

In the end, I came up with the following “Dimensions of Innovation” that guided the composition of the program.

  1. Startups

    Innovation needs optimists who believe they can change the world. One of the best ways to bring a crazy new idea to sustainable success is by starting a new company dedicated to creating and conquering markets that had not existed before.

    Many of the activities at ECSS 2013 relate to startups. The opening keynote is by François Bancilhon, serial entrepreneur currently working in the field of open data. Furthermore, we have Heleen Kist, strategy consultant in the area of access to finance. Last but not least, the first day includes a panel, specifically devoted to entrepreneurship, and one of the pre-summit workshops is entirely devoted to entrepreneurship for faculty.

  2. Patents

    Patents are traditionally used to protect (possibly large) investments that may be required for successful innovation. In computer science, patents are more than problematic, as evidenced by patent trolls, fights between giants such as Oracle and Google, and the differences in regulations in the US and in Europe. Yet at the same time (software) patents can be crucial, for example to attract investors for a startup.

    Several of the ECSS keynote speakers have concrete experience with patents — Pamela Zave at AT&T, and Erik Meijer from his time at Microsoft, when he co-authored hundreds of patents. Furthermore, Yannis Skulikaris of the European Patent Office will survey patenting of software-related inventions.

  3. Open Source, Open Data

    An often overlooked dimension of innovation is open source and open data. How much money can be made by giving away software, or by allowing others to freely use your data? Yet many enterprises are immensely successful based on open source and open data.

    At ECSS, keynote speaker Erik Meijer is actively working on a series of open source projects (related to his work on reactive programming). In the area of open data, we have entrepreneur François Bancilhon, and semantic web specialist Frank van Harmelen, who is head of the Network Institute of the Vrije Universiteit in Amsterdam.

  4. Teaching for Innovation

    How can universities use education to strengthen innovation? What should students learn so that they can become ambassadors of change? How innovative should students be to become successful in society? At the same time, how can outreach and education be broadened so that new groups of students are reached, for example via online learning?

    To address these questions, at ECSS we have Anka Mulder, member of the executive board of Delft University of Technology, and former president of the OpenCourseWare Consortium. She is responsible for the TU Delft strategy on Massive On-Line Open Courses (MOOC), and she will share TU Delft experiences in setting up their MOOCs.

    Furthermore, ECSS will host a panel discussion, in which experienced yet non-conformist teachers and managers will share their experience in innovative teaching to foster innovation.

  5. Fostering Innovation

    Policy makers and university management are often at a loss on how to encourage their stubborn academics to contribute to innovation, the “third pillar” of academia.

    Therefore, ECSS is a place for university managers to meet, as evidenced by the pre-summit Workshop for Deans, Department Chairs and Research Directors. Furthermore, we have executive board member Anka Mulder as speaker.

    Last but not least, we have Carl-Cristian Buhr, member of the cabinet of Digital Agenda Commissioner and EU Commission Vice-President Neelie Kroes, who will speak about the EU Horizon 2020 programme and its relevance to computer science research, education, and innovation.

  6. Inspirational Content

    All talk about innovation is void without inspirational content. Therefore, throughout the conference, exciting research insights and new course examples will be interwoven in the presentations.

    For example, we have Pamela Zave speaking on The Power of Abstraction, Frank van Harmelen addressing progress in his semantic web work at the Network Institute, and Felienne Hermans on how to reach thousands of people through social media. Last but not least, we have Erik Meijer, who is never scared to throw both math and source code into his presentations.

The summit will take place October 7–9, 2013, in Amsterdam. You are all welcome to join!


Update:

  • The slides I used for opening ECSS 2013