15 September
SFScon'21
DECODER and FASTEN projects showcase their active campaigns on ReachOut
04 August
OLYMPUS Campaign is Active
Try the distributed identity management demonstrator in your browser, then on...
02 July
Camel Designer Campaign is now Open
Try the new Modelio CAMEL Designer now!
24 June
Parsec Campaign is Open
Try Parsec on your own workstation and start sharing sensitive data stored on...
24 June
OW2con'21 ReachOut Presentation
Date: 24/6/2021 at 9:30

ReachOut for Project Leaders

Are you a project leader?

Set up a beta-testing campaign
 for your project!

  • Register your project
  • Arrange a training session
  • Promote the campaign
  • Learn from feedback

Improve your software
Align with market expectations

ReachOut for Beta Testers

Are you a beta tester?

Check out Existing Campaigns

Participate in research project
 beta-testing campaigns!

  • Choose your beta-testing job
  • Execute the tutorial
  • Answer feedback questions
  • Pick up your reward

Look inside state-of-the-art software
Enhance your professional network


Check out these campaigns

Wayeb

Wayeb

Wayeb is a Complex Event Processing and Forecasting (CEP/F) engine written in Scala. It is based on symbolic automata and Markov models.

▼ Click for campaign details and rewards

Wayeb is a Complex Event Processing and Forecasting (CEP/F) engine written in Scala. It is based on symbolic automata and Markov models.

 

Wayeb

Starts on:

01/05/2021

Ends on:

31/12/2021

Estimated Test Duration:

30 min

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered the opportunity to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Developers

Beta tester level:

Beginner

Campaign objectives

As Wayeb is going to be released in the coming months, we need to make sure that all of its functions work properly in the maritime and bio domains.

Requirements for this campaign

In order to build Wayeb from the source code, you need to have Java SE version 8 or higher and SBT installed on your system. Java 8 is recommended.
You can find more details about the build process at https://github.com/ElAlev/Wayeb/blob/main/README.md
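
Before building, you can quickly confirm that the prerequisites are available on your machine (a simple sanity check using the standard Java and SBT version queries; the exact output will depend on your setup):

$ java -version
$ sbt sbtVersion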

Beta test instructions and scenario

The tests to be done cover the build, recognition and forecasting processes.

1) Building: First download Wayeb from https://github.com/ElAlev/Wayeb. Assuming $WAYEB_HOME is the root directory of Wayeb:

$ cd $WAYEB_HOME

Then build a fat jar:

$ sbt assembly
If it prints a success message, it passes the test.
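
Before moving on to the recognition step, you may also want to create the results folder that the later commands write to (a small preparatory step, assuming $WAYEB_HOME is set as above):

$ mkdir -p $WAYEB_HOME/results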

2) Recognition: In $WAYEB_HOME/data/demo/data.csv you may find a very simple dataset, consisting of 100 events. The event type is either A, B or C. In $WAYEB_HOME/patterns/demo/a_seq_b_or_c.sre you may find a simple complex event definition for the above dataset. It detects an event of type A followed by another event of type B or C. If we want to run this pattern over the stream, we must first compile this pattern into an automaton (make sure you have created a results folder under $WAYEB_HOME):

$ java -jar cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar compile --patterns:patterns/demo/a_seq_b_or_c.sre --outputFsm results/a_seq_b_or_c.fsm

Now, results/a_seq_b_or_c.fsm is the produced serialized finite state machine. Note that we also provided as input a declarations.sre file. This file simply lets the engine know that the three predicates IsEventTypePredicate(A), IsEventTypePredicate(B) and IsEventTypePredicate(C) are mutually exclusive (i.e., an event can have only one type). This helps the compiler create a more compact automaton. We can use this FSM to perform event recognition on this simple dataset:

$ java -jar cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar recognition --fsm:results/a_seq_b_or_c.fsm --stream:data/demo/data.csv --statsFile:results/recstats

If it prints information about the throughput and the number of matches, it recognizes the pattern in the stream and it passes the test. 

3) Forecasting: For forecasting, we first need to use a training dataset in order to learn a probabilistic model for the FSM. For this simple guide, we will use $WAYEB_HOME/data/demo/data.csv both as a training and as a test dataset, solely for convenience. Normally, you should use different datasets.

We first run maximum likelihood estimation:

$ java -jar cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar mle --fsm:results/a_seq_b_or_c.fsm --stream:data/demo/data.csv --outputMc:results/a_seq_b_or_c.mc

The file results/a_seq_b_or_c.mc is the serialized Markov model. The final step is to use the FSM and the Markov model to perform forecasting:

$ java -jar cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar forecasting --modelType:fmm --fsm:results/a_seq_b_or_c.fsm --mc:results/a_seq_b_or_c.mc --stream:data/demo/data.csv --statsFile:results/forestats --threshold:0.5 --maxSpread:10 --horizon:20 --spreadMethod:classify-nextk

The last command should return some classification statistics like precision, f1 and accuracy.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire in order to enter the ReachOut incentives programme and to join the mailing list for this campaign, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ Back


DECODER - doc2json

DEveloper COmpanion for Documented and annotatEd code Reference

▼ Click for campaign details and rewards

DECODER builds an Integrated Development Environment (IDE) that combines information from different sources through formal and semi-formal models to deliver software project intelligence, shorten the learning curve of software programmers and maintainers, and increase their productivity. Developers will deliver high-quality code that is more secure and better aligned with requirements, and maintainers will immediately know what has been done, how and with what tools.

 

doc2json

Starts on:

07/06/2021

Ends on:

06/11/2021

Estimated Test Duration:

10 minutes

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered the opportunity to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate

Campaign objectives

doc2json extracts text and data from Word/OpenOffice/Excel documents into JSON format.

With appropriate parsing algorithms provided by the user, it can extract data from any structured documentation. The samples directory contains algorithms to extract the text of a word/openoffice document into a json format that nests the sections of the document. It also contains algorithms to extract data from invoices following the same openoffice template.

The innovative part of this project consists in translating a user's master algorithm that controls the events coming from the documentation into a slave algorithm that can be interrupted and introspected. The resulting parsing library contains many `goto`s to reflect the state machine model as the result of the translation. This part still has bugs; it nevertheless works on the 4 parsing algorithms of the project.

The goal of this campaign is to make sure that the first version of doc2json functions as expected.

Feedback is expected about:

  • Potential usages of doc2json (on a database of documents)
  • The effectiveness and the current limitations of doc2json

Check the current functions offered and try them at the end on your own documents.

Requirements for this campaign

doc2json takes as input a Word/OpenOffice document (like the internal documentation) and extracts the text/data into a JSON file whose format can be specified by the user.

To install and build doc2json, you will need:
- a Linux environment (only Ubuntu 20.04 has been tested) with the following packages: git, zlib1g-dev, g++, libclang-dev (these packages also exist on Windows and MacOS, and a port to these environments is planned for the future)
- or the ability to create Docker images and execute Docker containers

The test described below is for Linux and/or docker.

Beta test instructions and scenario

Install doc2json

doc2json is located at https://gitlab.ow2.org/decoder/doc_to_asfm

Linux installation

The application requires zlib https://zlib.net/ to retrieve the content of the documents and Clang Tools https://clang.llvm.org/docs/ClangTools.html to convert the user's parsing algorithm into an interruptible reading algorithm.

To install these libraries, you can type the following commands:

> sudo apt-get install zlib1g-dev clang libclang-dev
> apt list --installed "libclang*-dev"

If the clang version is less than clang-10 (for instance clang-6), the next cmake build process may fail and you will need to update to clang-10 with the following commands:

> sudo apt-get install clang-10 libclang-10-dev
> sudo apt-get purge --auto-remove libclang-common-6.0-dev
> sudo ln -s /usr/bin/clang-10 /usr/bin/clang
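
To confirm that the symlink now points to the expected compiler, you can check the reported version (a quick verification step, assuming clang is on your PATH):

> clang --version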

You can also check that llvm provides its own header files to the Clang Tools

> llvm-config --includedir

should return a path that contains llvm-xx. Fedora, for instance, returns /usr/include, which prevents the Clang Tools from finding some headers like <stddef.h> that are required for string manipulation during the source-to-source transformation. In such a case, you can try the Docker installation, which is more robust.

Please note that if, after having tested doc2json, you need to revert to your original clang 6 version, just type:

# do not use these commands before having built the project and the algorithm libraries
> sudo rm /usr/bin/clang
> sudo apt-get install clang

Then you can download and build the project

> git clone git@gitlab.ow2.org:decoder/doc_to_asfm.git doc2json
> cd doc2json
> mkdir build
> cd build
> cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$PWD ..
> make -j 4
# make test is optional: it checks that doc2json works on the internal documentation
> make test
> make install

Docker installation

To download and build it (Docker users):

> git clone git@gitlab.ow2.org:decoder/doc_to_asfm.git doc2json
> cd doc2json
> sudo docker build -t doc2json_img .
> docker run --name doc2json -it doc2json_img bash
# in the docker container
> cd /doc2json

Minimal test (to check doc2json works)

Under a Linux system, you can then run some test examples in the build directory, which is also the installation directory. You can type:

> ./bin/doc2json ./share/doc2json/libInvoiceReader.so -c ./share/doc2json/config-invoice.xml ../test/invoice.ods -o test/invoice.json
> ./bin/doc2json ./share/doc2json/libTextWordReader.so -c ./share/doc2json/config-text-word.xml ../test/StructuredDocument_french.docx -o test/StructuredDocument_french_word.json
> ./bin/doc2json ./share/doc2json/libTextOpenofficeReader.so -c ./share/doc2json/config-text-openoffice.xml ../test/StructuredDocument_french.odt -o test/StructuredDocument_french_openoffice.json

Under the Docker container, you can run the same test examples from the /doc2json directory

> ./bin/doc2json ./share/doc2json/libInvoiceReader.so -c ./share/doc2json/config-invoice.xml src/test/invoice.ods -o src/test/invoice.json
> ./bin/doc2json ./share/doc2json/libTextWordReader.so -c ./share/doc2json/config-text-word.xml src/test/StructuredDocument_french.docx -o src/test/StructuredDocument_french_word.json
> ./bin/doc2json ./share/doc2json/libTextOpenofficeReader.so -c ./share/doc2json/config-text-openoffice.xml src/test/StructuredDocument_french.odt -o src/test/StructuredDocument_french_openoffice.json

to extract the content of the documents in JSON format. You can open the document ../test/StructuredDocument_french.docx and compare its content with the result of the extraction, that is, the file test/StructuredDocument_french_word.json.

Then, in the build directory for Linux users and in the /doc2json/src directory for Docker users:

> diff test/StructuredDocument_french_openoffice.json test/StructuredDocument_french_word.json

should show no differences between the two extracted JSON files, even though the original formats (Word versus OpenOffice/OpenDocument) are very different.

A utility create-reader.sh is provided to generate a parsing library from a custom user's parsing algorithm. Hence the command (in the build directory for Linux users and in the /doc2json directory for docker users)

./bin/create-reader.sh -I . share/doc2json/InvoiceReader.cpp

regenerates the parsing library share/doc2json/libInvoiceReader.so from the parsing algorithm share/doc2json/InvoiceReader.cpp. The transformation into a state machine model is generated in the file share/doc2json/InvoiceReaderInstr.cpp.

Apply doc2json to your documents (a text with a title and sections, subsections, subsubsections, ...)

We now suppose that your document is named file.docx or file.odt. It should have a title identified by a specific style.

You may run the test in /tmp with an environment variable DOC2JSON_INSTALL_DIR that refers to the installation directory of doc2json. This is the build directory for Linux users. You can type

export DOC2JSON_INSTALL_DIR=$PWD

in the build directory before trying the test. This environment variable is automatically set for Docker users, but you need to copy your document into the /tmp directory of the Docker container with the command:

docker cp .../file.docx doc2json:/tmp

You need to look at the names of the styles used for the headings of your document.
By default, for a French Word document, these styles are Titre, Titre1, Titre2, Titre3, ... For an English Word document, they may be Title, Heading1, Heading2, Heading3, ... You need to provide these styles to doc2json by modifying the configuration file config-text-word.xml and replacing the French Titre styles with the styles that appear in the style ribbon of Word.

cp $DOC2JSON_INSTALL_DIR/share/doc2json/config-text-word.xml /tmp
# edit config-text-word.xml and replace Titre1 by Heading1, or by the styles of the different sections of your document
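
If you prefer to script the edit, a plain text substitution is usually enough; the following is only an illustrative sketch that assumes your document uses the standard English styles Title, Heading1, Heading2, ... and that the style names appear as plain text in the configuration file:

# numbered headings first (Titre1 -> Heading1, ...), then the bare title style (Titre -> Title)
sed -i 's/Titre\([1-9]\)/Heading\1/g' /tmp/config-text-word.xml
sed -i 's/Titre/Title/g' /tmp/config-text-word.xml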

If the title is not recognized, doc2json will answer "bad generation of the output format!" with a corrupted output JSON document. This behaviour should be improved in a future version.

For the OpenDocument format, you need to find the style ribbon by clicking on the parameter menu (top right) of LibreOffice. Then styles like "Heading 1" should be replaced by "Heading_20_1" in config-text-openoffice.xml (spaces are replaced by "_20_"). Sometimes LibreOffice renames these styles internally. For instance, it may rename the "Title" style to "P1" or "P2". The parsing algorithm is not smart enough to recognize this renaming (it will be in a future version). So if the extraction fails, you can unzip your file.odt and then edit the file content.xml to look for the text of your "Title" and see which style is associated with it. Do not spend too much time on that point if it fails.
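
For example, the following commands sketch one way to inspect the internal style names; they assume standard ODF packaging, where content.xml sits inside the .odt zip archive and heading styles appear in text:style-name attributes:

# extract content.xml from the ODF archive and list the style names it references
unzip -o file.odt content.xml
grep -o 'text:style-name="[^"]*"' content.xml | sort | uniq -c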

Then the appropriate command

> cd /tmp
> $DOC2JSON_INSTALL_DIR/bin/doc2json $DOC2JSON_INSTALL_DIR/share/doc2json/libTextWordReader.so -c /tmp/config-text-word.xml file.docx -o file.json
> $DOC2JSON_INSTALL_DIR/bin/doc2json $DOC2JSON_INSTALL_DIR/share/doc2json/libTextOpenofficeReader.so -c /tmp/config-text-openoffice.xml file.odt -o file.json

should extract the sections and the text of your document into file.json.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire in order to enter the ReachOut incentives programme and to join the mailing list for this campaign, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ Back


Parsec

Simply collaborate with complete confidentiality and integrity in the cloud.
With innovative "anti-ransomware" security, available today in an intuitive solution for sharing sensitive data.

▼ Click for campaign details and rewards

Parsec is the secure collaborative solution that provides confidential data sharing and storage in the cloud, whether public or private.
In order to improve the user experience of the solution, we are setting up this series of tests on ReachOut that will allow us to evaluate the following points:

- File management features
- Administrative and user management features
- Ergonomics
- Usability
- Interface design

Parsec is available as a desktop version on Windows, Mac and Linux, and an Android version will soon be available to the public.
Our Parsec solution is certified by ANSSI (Agence nationale de la sécurité des systèmes d'information, the French national cybersecurity agency).

 

Parsec PC

Starts on:

23/06/2021

Ends on:

01/11/2021

Estimated Test Duration:

30min

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered the opportunity to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users

Beta tester level:

Beginner, Intermediate, Advanced

Campaign objectives

In order to improve the user experience of the solution, we are implementing this series of tests on ReachOut which will allow us to evaluate the following points:

  • File management features
  • Administrative and user management functions
  • Ergonomics
  • Usability
  • Interface design

Requirements for this campaign

Parsec is available as a desktop version for Windows, Mac and Linux, and an Android version will soon be available to the public.
This first test will focus on the Parsec desktop version v2.3.1 for Windows, Mac or Linux, downloadable from the Parsec website - Get Parsec.

The Parsec vocabulary is specific to the software; all test participants should read the vocabulary section in the Parsec User Guide.

You can also watch the video explaining how Parsec works.

For part of the scenario, you will need to invite a second user. You can either use another e-mail address (in which case you will need a second computer) or invite someone else you know.

Beta test instructions and scenario

STEP 1: Installation and creation of the working environment

  1. Download Parsec on the Parsec website - Get Parsec.
  2. Install the software, following the instructions received by email.
  3. Open Parsec in your browser.
  4. Create your organization and your workspaces.

STEP 2 : Invite a new user in your organization

  1. Invite a new user by referring to the UG (user guide). 
  2. Share a workspace with the user.
  3. Test the collaborative file synchronization with your guest by modifying the content of a file. Then check that all the modifications made to the file are taken into account.
  4. Test the history function on the same file. (see UG) 

STEP 3 : Your workspace and its features

  1. Import files into your workspaces from the Parsec software interface.
  2. Import files into the shared Parsec directory using your PC file explorer. 
  3. Modify a file, save and exit.
  4. Test the different workspace features:
  • Going back in time
  • Sharing files
  • Renaming a file

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire in order to enter the ReachOut incentives programme and to join the mailing list for this campaign, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ Back


MORPHEMIC

The MORPHEMIC project proposes a unique way of adapting and optimizing Cloud computing applications.

▼ Click for campaign details and rewards

MORPHEMIC extends the MELODIC project (www.melodic.cloud) with model adaptation in order to support live application reconfiguration: a component can run in different technical forms, i.e. in a Virtual Machine (VM), in a container, as a big data job, or as serverless components, etc. The technical form of deployment is chosen during the optimization process to fulfil the user’s requirements and needs. The quality of the deployment is measured by a user-defined and application-specific utility. Depending on the application’s requirements and its current workload, its components could be deployed in various forms in different environments to maximize the utility of the application deployment and the satisfaction of the user. Proactive adaptation is not only based on the current execution context and conditions but aims to forecast future resource needs and possible deployment configurations. This ensures that adaptation can be done effectively and seamlessly for the users of the application.

The MORPHEMIC deployment platform will therefore be very beneficial for heterogeneous deployment in distributed environments combining various Cloud levels, including Cloud data centres, edge Clouds, 5G base stations, and fog devices. Advanced forecasting methods, including the ES-Hybrid method that recently won the M4 forecasting competition, will be used to achieve the most accurate predictions. The outcome of the project will be implemented as a complete solution, covering modelling, profiling, optimization, runtime reconfiguration and monitoring. The MORPHEMIC implementation will then be integrated as a pre-processor for the existing MELODIC platform, extending its deployment and adaptation capabilities beyond the multi-cloud and cross-cloud to the edge, 5G, and fog. This approach allows for a path to early demonstrations and commercial exploitation of the project results.

 

Modelio CAMEL Designer

Starts on:

30/06/2021

Ends on:

30/11/2021

Estimated Test Duration:

30 mins to 1 hour

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered the opportunity to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Developers

Beta tester level:

Beginner, Intermediate

Campaign objectives

As Modelio CAMEL Designer is going to be released in the coming months, we need to make sure that all of its functions work properly.

Requirements for this campaign

In order to install and use Modelio CAMEL Designer, you need to have Java SE version 8 and Modelio 4.1 installed on your system (see below for setting up Modelio).

Beta test instructions and scenario

1. Setting up Modelio CAMEL designer

2. Create Camel Model: Steps to complete

  • Create an empty package by right-clicking your root UML project or any other Package -> Create Element -> Package
  • Right-click an empty package to show available commands and click on Camel Designer -> Create element -> Create Camel Model 

Expected results:

An empty Camel Model is created.

3. Create metric type model: Steps to complete

  • Right-click a CAMEL Model to display the list of available commands
  • Click on Camel Designer -> Create element -> Metric_Model 

Expected results:

A metric type model is created inside the CAMEL model.

4. Create software component: Steps to complete

  • Create a Deployment Model: right-click on the CAMEL model -> Camel Designer -> Create element -> Deployment Model
  • Create a Deployment Diagram: right-click the deployment model -> Camel Designer -> Create Diagram -> Deployment model Diagram
  • Open the deployment model diagram
  • In the palette, in the Deployment Type box, select the Software Component icon, then draw a rectangle inside the deployment model diagram to create a Software Component 

Expected results:

A software component is created and displayed in the diagram.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire in order to enter the ReachOut incentives programme and to join the mailing list for this campaign, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ Back


MSA UI

The Maritime Situational Awareness (MSA) application is a web-based platform providing end-users with tools for monitoring events, illegal activities and possible threats in the maritime environment.

▼ Click for campaign details and rewards

The Maritime Situational Awareness (MSA) application is a platform providing end-users and decision-making experts with tools and means, via an intuitive User Interface (UI), for monitoring and forecasting events, illegal activities and possible threats in the maritime environment.
It has been developed by the EU INFORE project (http://www.infore-project.eu/).

The back-end of the application relies on advanced big data and AI techniques for (i) producing synopses of maritime data to improve scalability over large streams of data, (ii) detecting simple and complex maritime events, and (iii) forecasting maritime events.

The aforementioned components are the building blocks of automatic, sophisticated data science workflows that can be designed and executed using RapidMiner Studio. The results of the maritime workflows (i.e., maritime events) of the MSA application are available as Kafka topics and are displayed to end-users (e.g., mariners, coastguard authorities, VTS officers, etc.) via an interactive web interface.

The UI of this application is a real-time interactive map where all output data from INFORE models, coming as streams from Kafka topics, are rendered accordingly in the MSA UI.

End-users are able to monitor the area of their interest in real time and inspect all the crucial parameters related to MSA, such as:
- The latest (current) position of vessels
- Visualization of simple & complex events (proximities between vessels, illegal fishing activities, etc.)
- Dynamic visualization of vessels' past tracks (trajectories) and past events

 

MSA UI

Starts on:

04/06/2021

Ends on:

01/11/2021

Estimated Test Duration:

15 minutes

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered the opportunity to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner

Campaign objectives

The MSA UI application aims to provide end-users with useful tools and means, via an intuitive user interface, to monitor and forecast events, illegal activities and possible threats in the maritime environment. The objective of this use case is to provide us with feedback about how feasible and easy it is to monitor crucial aspects of the maritime environment, such as the latest vessel positions and occurred events such as proximities between two ships, "in areas" events when a vessel crosses specified areas of interest, illegal fishing and more.

In the MSA UI, vessels are visualized with rectangular blue markers, while events are visualized with circle markers accompanied by a small pulse animation if it is a simple event, or a greater pulse if it is a complex event. Below we describe the events that may occur:

- Proximity Event: When two ships are close enough to each other
- In Areas Event: When a vessel enters a specified area of interest (e.g. anchorage area)
- AIS Off Event: When the vessel's AIS device is turned off
- Fishing (complex) Event: When a vessel is about to engage in illegal fishing activities.

Requirements for this campaign

In order to access the MSA User Interface, after visiting the application's page at https://msa.marinetraffic.com/, you have to fill in the Log in form with these user credentials:

- username: msa.demo.1@marinetraffic.com
- password: 93104276

After successfully logging in, the user is able to pan around the map and use the corresponding tools to navigate through the application's UI.

Beta test instructions and scenario

For this use case, we consider an office agent who works at Piraeus Port in Athens, Greece, and needs an overview of the position and status of the vessels located near the port, as well as to track any illegal fishing activities occurring in the open sea of the Saronic Gulf.

After successfully logging in to the MSA UI, follow the guidelines described below:

  1. Pan around the map and inspect the relevant sea area to get an overview of the vessels' locations as well as the events occurring.

  2. Click on any vessel or event marker on the map to get more information about that particular marker. Try investigating the vessel's destination, speed and type. As for the event markers, for some types of events (e.g. proximity & in-areas events), after clicking on the marker you can see that extra geometries are displayed as overlays connecting the vessels that were engaged in a proximity event, or the geometry of the area that a vessel entered at that exact time.

  3. Using the sidebar tools menu located at the left side of the browser window, click on the layers icon button and click between the available data layers to toggle them on and off.

  4. Using the Filtering icon button underneath the layers button, try to filter the vessels by their type. Toggle between any desired vessel types and keep those needed. Do the same for event types accordingly.

  5. Find out how many events (simple and forecast ones) are in this particular sea area. This information is kept under the Events icon button in the left sidebar. Hit the Events icon to bring up the events panel list. There are two tabs, "Simple Events" & "Forecast Events". Now hover over the events list and click on any event card. The map zooms in to the event's specific location. Try this process for various types of events.

  6. Now pan across the map and click on a vessel marker. In the popup box showing the vessel's description, there is a 'Past track' button. Try clicking on this button to investigate the vessel's past track (trajectory) and look closely to find any past events that may have occurred along that trajectory line.

  7. Congratulations! Following the above steps you have had an overview of the current maritime situation of this case area by investigating all the current vessel locations and occurred events, and you were able to track any vessels that are about to engage in illegal activities such as illegal fishing.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire in order to enter the ReachOut incentives programme and to join the mailing list for this campaign, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ Back


DECODER - JmlGen

DEveloper COmpanion for Documented and annotatEd code Reference

▼ Click for campaign details and rewards

DECODER builds an Integrated Development Environment (IDE) that combines information from different sources through formal and semi-formal models to deliver software project intelligence, shorten the learning curve of software programmers and maintainers, and increase their productivity. Developers will deliver high-quality code that is more secure and better aligned with requirements, and maintainers will immediately know what has been done, how and with what tools.

 

JmlGen

Starts on:

09/04/2021

Ends on:

06/11/2021

Estimated Test Duration:

1 hour

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered the opportunity to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Developers

Beta tester level:

Beginner

Campaign objectives

JmlGen generates JML annotations from what can be guessed out of a java project: the result, a JML-annotated project, can then be processed by JML tools, like the OpenJml program verification tool.
Check the current functions offered and try them on your own java software.

Requirements for this campaign

JmlGen takes as input a java project (the project root directory, like the one out of a "git clone"), and generates JML in java files located in a specified destination directory.

To install and build JmlGen, you will need:
- A java environment (java11 minimum)
- Maven

The test described below is for Linux, but should be adaptable to other platforms (at least, paths in configuration files would probably have to be adapted).
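
You can quickly check that both prerequisites are available (a simple sanity check using the standard version flags):

$ java -version
$ mvn -version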

Beta test instructions and scenario

Install JmlGen

JmlGen is located at https://gitlab.ow2.org/decoder/jmlgen .

To download and build it:

$ git clone https://gitlab.ow2.org/decoder/jmlgen.git
$ cd jmlgen
$ mvn clean install

Minimal test (to check JmlGen works)

You can then run a test example, as follows:

$ java -jar target/jmlgen-0.0.1-SNAPSHOT.jar src/main/resources/jmlgen.properties

Some Java code with JML inside should be produced in /tmp/jmlgen (the original Java code is in the src/test/java/eu/decoder/sample_jmlgen/ directory).
To see it, for example:

$ cat /tmp/jmlgen/src/test/java/eu/decoder/sample_jmlgen/Sample*

(note that you may customize the output directory by editing src/main/resources/jmlgen.properties and changing the value of the "target" property).

Apply JmlGen to a real project

We will take as example the OW2 sat4j project (https://gitlab.ow2.org/sat4j).

You may run the test in /tmp:

$ cd /tmp
$ git clone https://gitlab.ow2.org/sat4j/sat4j.git

Now create a JmlGen configuration file (let's say, /tmp/sat4j.properties), with the following content (you may copy/paste it):

root: /tmp/sat4j
destination: /tmp/sat4j-JML
sourcepath: org.sat4j.br4cp/src/main/java:org.sat4j.core/src/main/java:org.sat4j.intervalorders/src/main/java:org.sat4j.maxsat/src/main/java:org.sat4j.pb/src/main/java:org.sat4j.sat/src/main/java:org.sat4j.sudoku/src/main/java

(Note: sourcepath lists all source folders, separated with colons; in many java projects, it would simply be set to "src/main/java").

Go back to the directory where you installed JmlGen, and run it:

$ java -jar target/jmlgen-0.0.1-SNAPSHOT.jar /tmp/sat4j.properties

You should see logs in the console that detail where JML annotations have been inserted: open some of the corresponding files (under /tmp/sat4j-JML) to discover the JML annotations.

For example, the following command should display some JML annotations inserted in the SuDoku.java sample of Sat4j:

$ cat /tmp/sat4j-JML/org.sat4j.sudoku/src/main/java/org/sat4j/apps/sudoku/SuDoku.java | grep "/*@"

Note that some annotations can be of immediate interest: for example, "non_null" annotations reflect that a method result should be tested for null, as the method was called without a check (for a call like "method1().method2()", JmlGen would annotate "method1()" as "non_null", which denotes a risk of a null pointer exception). A plain-text search for "non_null" annotations, without any analysis tool, can be profitable.
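
For instance, a recursive search over the generated tree gives a quick overview of where such annotations were inserted (a minimal sketch using standard grep; adjust the path if you chose a different destination directory):

$ grep -rn "non_null" /tmp/sat4j-JML | head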

Apply JmlGen to your own project

Now you are ready to use JmlGen on your own! And report bugs/issues at https://gitlab.ow2.org/decoder/jmlgen/-/issues.

When done, use any 3rd-party JML tool (like OpenJml) to perform analysis of your java code, now instrumented by JmlGen.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire in order to enter the ReachOut incentives programme and to join the mailing list for this campaign, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ Back


INTUITE_AI

Our mission is to unleash the power of sensitive data. We created software capable of generating realistic artificial data to enable safe data sharing between companies.

▼ Click for campaign details and rewards

There is an ongoing conflict around customer data. On the one hand, customers want their privacy protected and fear the adverse consequences that might arise from improper or malevolent use of their data. On the other hand, companies need to analyze their customers’ data (become “data driven”) to remain competitive globally.
The de-facto standard technique to mitigate this problem, data anonymization, has been proven inadequate at truly preserving customers’ privacy, while simultaneously reducing data utility, since its principle of operation is based on information destruction.
We propose a novel approach for privacy preserving data analysis based on synthetic data. We plan on using a new trend in machine learning to create a dataset that’s fully synthetic, i.e. does not contain data of real people or entities, but yields the same results upon statistical analysis. Because it does not contain real data, it is privacy-preserving and GDPR compliant.
Synthetic datasets surpass anonymized data both in terms of security and utility. We want to make this technique available to the market so that customers can benefit from added safety regarding their data while companies can increase their competitiveness.

 

INTUITE_AI - Generate realistic artificial data

Starts on:

23/11/2020

Ends on:

06/11/2021

Estimated Test Duration:

30 mins to 1 hour

Reward for this campaign:  

30€

Incentives

1. T-shirts, stickers, pens
2. Promotion via our social media channels
3. Also, Beta Testers will be offered the opportunity to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate

Campaign objectives

The objective of this beta test is not to assess data generation performance; instead, the goal is to assess whether all the system functionalities work properly.

Requirements for this campaign

The product can ideally be tested by everyone; however, a bit of data knowledge would be beneficial.

Beta test instructions and scenario

The software aims at creating artificial tabular data. The system takes as input a table and automatically trains a Machine Learning model capable of generating an artificial copy. The synthetic version retains the same statistical properties but is void of sensitive information.

The process is composed of 5 steps:

1. Register and log in (http://app.intuite.ai/)
After filling in the registration form, you will receive a link to confirm your email. Upon verification of your email, a password will be sent to you.
2. Load data
3. Train the model
4. Synthesize new data
5. Download the data

The user guide is available at:

https://docs.google.com/document/d/1QEh5upzahsgrKTORPP9UBRp_EZiky1rJ35fAk2zNMU8/edit?usp=sharing

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire in order to enter the ReachOut incentives programme and to join the mailing list for this campaign, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ Back

Recent Completed Campaigns


SMOOTH

Assisting Micro Enterprises to adopt and be compliant with GDPR

▼ Click for campaign details and rewards

The SMOOTH project assists micro-enterprises in adopting and being compliant with the General Data Protection Regulation (GDPR) by designing and implementing easy-to-use and affordable tools that generate awareness of their GDPR obligations and analyse their level of compliance with the new data protection regulation.

 

SMOOTH Market Pilot

Estimated Test Duration:

20-35min

Incentives

1) A free GDPR compliance report including a series of recommendations to improve your company’s compliance with the GDPR.
  
2) Being compliant avoids potential fines. The lack of awareness, expertise and resources makes small enterprises the most vulnerable institutions in the face of strict enforcement of the GDPR.

3) Build up your brand reputation with clients and network by showing you have adequate solutions in place to protect their data.

Also, Beta Testers will be offered the opportunity to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Lottery, and 24 randomly chosen Beta Testers will be awarded a money prize in recognition.

Target beta testers profile:

Business users

Beta tester level:

Beginner

Campaign objectives

The objective of this campaign for the SMOOTH project is to reach 500 micro-enterprises to complete the market pilot.

Requirements for this campaign

Micro-enterprises: enterprises that employ fewer than 10 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 2 million,

or small enterprises (SME): enterprises that employ fewer than 50 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 10 million, excluding enterprises that qualify as micro-enterprises.

Beta test instructions and scenario

Please read carefully these instructions before completing the Questionnaires.

To connect to the SMOOTH platform and perform the test, please use this link.

Campaign Mailing List

▲ Back


DataBench Toolbox

Based on existing efforts in big data benchmarking, the DataBench Toolbox provides a unique environment to search, select and deploy big data benchmarking tools and knowledge about benchmarking

▼ Click for campaign details and rewards

At the heart of DataBench is the goal of designing a benchmarking process that helps European organizations developing Big Data Technologies (BDT) to reach for excellence and constantly improve their performance, by measuring their technology development activity against parameters of high business relevance.

DataBench will investigate existing Big Data benchmarking tools and projects, identify the main gaps and provide a robust set of metrics to compare technical results coming from those tools.

 

Generation of architectural Pipelines-Blueprints

Estimated Test Duration:

30 minutes plus mapping to blueprints that requires desk analysis

Incentives

In recognition of your efforts and useful feedback, you will be added as a DataBench contributor on our website, your blueprint will be published, and the authorship of your contribution will be acknowledged in the Toolbox. This offer is limited to beta testers interacting with the team by 15 December 2020. You will be contacted individually for contribution opportunities. Please provide a valid contact email during the survey phase and in the form for suggestions of new blueprints.

Also, Beta Testers will be offered the opportunity to be added to the ReachOut Hall of Fame, will take part in the ReachOut Lottery, and 16 randomly selected beta testers providing a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Developers

Beta tester level:

Advanced

Campaign objectives

DataBench has released the DataBench Toolbox, a one-stop shop for big data and AI benchmarking. It offers a catalogue of existing benchmarking tools and information about technical and business benchmarking. 

This campaign (extended until the end of January 2021) aims at getting content in the form of new architectural big data/AI blueprints mapped to the BDV Reference Model and the DataBench pipeline/blueprint. In this campaign we focus mainly on advanced users who would like to contribute practical examples of mapping their architectures to the generic blueprints. The results will be published in the DataBench Toolbox, acknowledging ownership, and can be used by the owners for their own purposes in their projects/organizations to demonstrate their alignment with existing standardization efforts in the community.

Note that we provide information about the BDV Reference Model, the four steps of the DataBench Generic Data Pipeline (data acquisition, preparation, analysis and visualization/interaction), and the generic big data blueprint devised in DataBench, as well as some examples and best practices for producing the mappings. Testers should study the available DataBench information and guidelines. Then, using the provided steps, testers should prepare their own mappings, resulting diagrams and explanations, if any. The Toolbox provides a web form interface to upload all relevant materials, which will later be assessed by an editorial board in DataBench before final publication in the Toolbox.

Requirements for this campaign

- Having a big data/AI architecture in place in your project/organization
- Willingness to provide mappings from your architecture to be part of the DataBench pipeline/blueprints
- Basic Knowledge of web browsing
- Internet connection
- Use preferably Google Chrome

For any inquiry regarding this campaign, please write an email to databenchtoolbox@gmail.com.

Beta test instructions and scenario

The Toolbox is accessible without the need to log in to the system, but the options are limited to pure search. You can see that without registering the options in the menu are very few. To perform this campaign, we would like all involved users to first sign up to the DataBench Toolbox to get a user profile that you will use throughout the campaign:

- Go to https://databench.ijs.si/ and click on the “Sign up” option located at the top of the page on the right side.

- Fill in the form to generate your new user by providing a username and password of your choice, your organization, email, and your user type (at least Technical for this exercise).

Once you have created your user, please sign in to the Toolbox with it. You will be directed to the Toolbox main page again, where you will see that you have more options available.

Besides the options available through the menu, the main page provides:
A) a carousel with links,
B) User journeys for users of different profiles: Technical, Business and Benchmark providers,
C) Videos aimed at these 3 types of users, briefly explaining the main functionalities offered for each of them,
D) Shortcuts to some of the functionalities, such as the FAQ, access to the benchmarks or knowledge catalogues, the DataBench Observatory, etc.

A) Get information about DataBench pipelines and blueprints

This campaign aims at providing you with the means to search and browse existing data pipelines, together with explanations on how to map your own architecture to efforts such as the BDV Reference Model and the DataBench Framework, and the mappings to existing initiatives.

We encourage you to first go to the Technical user journey accessible from the front page of the Toolbox, read it and follow the links given to you to get acquainted with the entries related to blueprints and pipelines. In the “Advanced” user journey you will find the following:

- Link to the DataBench Framework and its relation to the BDV Reference Model, where you can find an introduction to the different elements that compose the DataBench approach towards technical benchmarking.

- Link to the DataBench Generic Pipeline, where the 4 main steps of data pipelines are explained. These 4 steps are the basic building blocks for the mappings to other blueprints and existing initiatives.

- User Journey - Generic Big Data Analytics Blueprint: This is the main piece of information that you need to understand what we mean by mapping an existing architecture to our pipelines and blueprints. You will find links to the generic pipeline figure.

- Practical example of creating a blueprint and derived cost-effectiveness analysis: Targeting the Telecommunications Industry.

- Ways to report your suggestions for new blueprints, by using the Suggest blueprint/pipeline option under the Knowledge Nuggets menu.

Below is a summary of the minimal set of actions we encourage you to do:

  1. Go to the User journeys area of the main page and click on “Technical”.

  2. Go to the link to the User Journey: Generic Big Data Analytics Blueprint at the bottom of the “Advanced” area of the page.

  3. Read and understand the different elements of the pipeline (the 4 steps) and the elements of the generic blueprint as described in the previous link.

  4. Check examples of already existing blueprints. In order to do that, use the search box located at the top right corner and type “blueprint”. Browse through the blueprints.

B) Desk analysis

Once you are familiar with the DataBench Toolbox and the main concepts related to the blueprints, you need to do some homework. You should try to map your own architecture to the DataBench pipeline and the generic blueprint. We suggest the following steps:

- Prepare a figure with the architecture you have in mind in your project/organization. 

- Create links to the 4 steps of the data pipeline and generate a new figure showing the mapping.

- Create links to the Generic Big Data Analytics Blueprint figure and generate a new figure showing the mappings. In order to do so, you might use the generic pipeline figure and particularize it to your components, as was done in the example provided for the Telecommunications Industry.

C) Upload your blueprint to the Toolbox

- Upload your files as PDF or images by using the blueprint suggestion form available from the Knowledge Nuggets menu. Try to include a description with a few words about the sector of application of your blueprint, the main technical decisions, or anything you might find interesting to share.

- The DataBench project will review the blueprints and publish them on the platform, acknowledging your authorship.

Congratulations! You have completed the assignment of this campaign! Go now to fill in the feedback questionnaire. Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

 

Finding the right benchmarks for technical and business users

Estimated Test Duration:

30 to 40 minutes

Incentives

In recognition of your efforts and useful feedback, you will be added as a DataBench contributor on our website. This offer is limited to beta testers interacting with the team by 6 December 2020. You will be contacted individually for contribution opportunities. Please provide a valid contact email during the survey phase.

Also, Beta Testers will be offered the opportunity to be added to the ReachOut Hall of Fame, will take part in the ReachOut Lottery, and 16 randomly selected beta testers providing a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users, Developers

Beta tester level:

Intermediate

Campaign objectives

DataBench has released the DataBench Toolbox, a one-stop shop for big data and AI benchmarking. It offers a catalogue of existing benchmarking tools and information about technical and business benchmarking. 

This campaign aims at getting feedback on the usage of the Toolbox and the user interface of its web front-end. The Toolbox provides a set of user journeys, or suggestions, for three kinds of users: 1) Technical users (people interested in technical benchmarking), 2) Business users (interested in finding facts, tools, examples and solutions to make business choices), and 3) Benchmark providers (users from benchmarking communities or who have generated their own benchmarks). In this campaign we focus mainly on technical and business users. We provide some minimal instructions for these two types of users, in order to understand whether finding information in the Toolbox is a cumbersome process, and to get your feedback. The idea is to use the user journeys drafted in the Toolbox to drive this search process and to understand whether users find this information sufficient to kick-start the process of finding the right benchmark and knowledge they were looking for.

Requirements for this campaign

- Previous knowledge about Big Data or AI
- Basic Knowledge of web browsing
- Internet connection
- Use preferably Google Chrome

For any inquiry regarding this campaign, please write an email to databenchtoolbox@gmail.com.

Beta test instructions and scenario

The Toolbox is accessible without the need to log in to the system, but the options are then limited to pure search. You can see that without registering there are very few options in the menu.

Initial steps to log in as a Toolbox user

To perform this campaign, we would like all involved users to first sign up to the DataBench Toolbox and create a user profile that you will use throughout the campaign:

- Go to http://databench.ijs.si/ and click on the “Sign up” option located at the top right of the page.
- Fill in the form to create your new user by providing a username and password of your choice, your organization, your email, and your user type (Technical and/or Business, depending on your preferences and skills).

Once you have created your user, please sign in to the Toolbox with it. You will be directed back to the Toolbox main page, where you can check that more options are now available.

Besides the options available through the menu, the main page provides:
A) a carousel with links,
B) User journeys for users of different profiles: Technical, Business and Benchmarks providers,
C) Videos aimed at these 3 types of users explaining briefly the main functionalities offered for each of them,
D) Shortcuts to some of the functionalities, such as FAQ, access to the benchmarks or knowledge catalogues, the DataBench Observatory, etc. 

A) For Technical Users

This campaign aims at using the user journeys as a starting point to help you navigate the tool. We encourage you to click on the Technical user journey, read it and follow the provided links to get acquainted with the tool and what you can do with it. Get used to the two main catalogues: the benchmarks catalogue (tools for big data and AI benchmarking) and the knowledge nuggets catalogue (information about technical and business aspects related to benchmarking and big data technologies). Learn about existing big data architectural blueprints and browse some of them.

Additionally, if you already have a goal in mind (e.g. finding a benchmark for testing a specific ML model, or comparing the characteristics of different NoSQL databases), we encourage you to try to find the appropriate benchmark and report your conclusions later in the questionnaire.

Below is a summary of the minimal set of actions we encourage you to do:

  1. Go to the User journeys area of the main page and click on “Technical”. 

2. Read the content of this page, divided into advice for “Beginners” (first-time users) and “Advanced” (extra recommendations on what to do next). Focus first on the “Beginners” area and click on the different links to browse the different options and get used to the tool. We recommend that you come back to the User journey page until you have clicked on all the available options for beginners, but feel free to stray and use the navigation and links from other pages to get used to the tool. After you finish clicking on all the options for beginners, you should have seen the benchmarks and knowledge nuggets catalogues, used some of the search functionalities and browsed some of the existing architectural blueprints. You are now ready to go further!

3. Focus now on the “Advanced” area of the User journey page.

- Here you will find ways to suggest new content via web forms (e.g. new benchmarks you know of that are missing from the catalogue, a version of a big data blueprint you are dealing with in a project, or a new knowledge nugget based on your experience). We are not expecting you to fill in these forms at this stage, just to acknowledge their potential value (and feel free to contribute at any time).

- You will also find links to more specific advanced user journeys and practical examples at the end of the advanced section. Click on the ones that catch your attention and start navigating via the links they offer. From this point on we expect that you know the main options of the Toolbox and how to navigate and browse through it. You should have noticed by now that both benchmarks and knowledge nuggets are annotated or categorized with clickable tags, which makes it possible to navigate through related items.

4. Get used to the search functionalities. The Toolbox offers 4 types of search:
- The search text box located at the top right corner of the pages. This is a full-text search: you can enter any text and the matching results from both the benchmark and knowledge nugget catalogues will appear.

- The “BDV Reference Model” option from the menu allows you to have a look at the model created by the BDV PPP community (check the BDV SRIA for more details). The model is represented graphically and is clickable. If you click on any of the vertical or horizontal layers of the model, you will be directed to the benchmarks and/or knowledge annotated with these layers in the Toolbox. Browse through this search.

- The “Guided benchmark search” option. In simple terms, this is a search by the tags used to annotate benchmarks and knowledge nuggets. These tags range from technical to business aspects. You can click on the categories of tags to find related information. Browse some of the options of this search.

- Finally, the “Search by Blueprint/Pipeline” option presents graphically a generic architectural blueprint developed in DataBench with the most common elements of a big data architecture. The blueprint is aligned with the 4 steps of the DataBench generic data pipeline (data acquisition, preparation, analysis and visualization/interaction). The graphic is clickable both at the level of the four pipeline steps and at some of the detailed elements of the blueprint. Click on the parts of the diagram you are interested in to find a list of existing benchmarks and nuggets related to them, and browse some of them. There are nuggets that summarize existing big data tools for each element of the pipeline. See if you find it easy to browse through the results.

Congratulations! You have completed the assignment of this campaign! Go now to fill in the feedback questionnaire. 

NOTE – Some of the available benchmarks can be deployed and run on your premises. Those are listed first in the Benchmark catalogue, and when you click on them you will find the configuration file at the bottom of their description. If you want to run any of them, you should have dedicated infrastructure to do so. We are not expecting you to do so in this exercise.

B) For Business users

As for technical users, this campaign aims at using the user journeys as a starting point to help you navigate the tool. We encourage you to click on the Business user journey, read it and follow the links provided to get acquainted with the tool and what you can do with it. Get used to the two main catalogues: the benchmarks catalogue (tools for big data and AI benchmarking), but mainly the knowledge nuggets catalogue (information about technical and business aspects related to benchmarking and big data technologies). Learn about existing big data architectural blueprints and browse some of them, as they apply to different industries and might be of interest for business purposes.

Additionally, if you already have a goal in mind (e.g. finding the most widely used business KPIs in a specific sector), we encourage you to try to find the appropriate information in the knowledge nugget catalogue and report your conclusions later in the questionnaire.

Below is a summary of the minimal set of actions we encourage you to do:

  1. Go to the User journeys area of the main page and click on “Business”. 

2. Read the content of this page, divided into advice for “Beginners” (first-time users) and “Advanced” (extra recommendations on what to do next). Focus first on the “Beginners” area and click on the different links to browse the different options and get used to the tool. We recommend that you come back to this User journey page until you have clicked on all the available options for beginners, but feel free to stray and use the navigation and links from other pages to get used to the tool. After clicking on all the options for beginners, you should have seen the benchmarks and knowledge nuggets catalogues, used some of the search functionalities and browsed some of the existing architectural blueprints. You are now ready to go further!

3. Focus now on the “Advanced” area of the User journey page.
- You will find links to different elements, such as nuggets related to business KPIs, nuggets by industry, etc. Browse through them and follow the links.

- You will find ways to suggest new content via web forms (e.g. a new knowledge nugget based on your experience). We are not expecting you to fill in these forms at this stage, just to acknowledge their potential value (but feel free to contribute at any time).

- You will also find links to more specific advanced user journeys and practical examples at the end of the advanced section. Click on the ones that catch your attention and start navigating via the links they offer. From this point on we expect that you know the main options of the Toolbox and how to navigate and browse through it. You should have noticed by now that both benchmarks and knowledge nuggets are annotated or categorized with clickable tags, which makes it possible to navigate through related items.

4. Get used to the search functionalities. The Toolbox offers 4 types of search:
- The search text box located at the top right corner of the pages. This is a full-text search: you can enter any text and the matching results from both the benchmark and knowledge nugget catalogues will appear.

- The “BDV Reference Model” option from the menu allows you to have a look at the model created by the BDV PPP community (check the BDV SRIA for more details). The model is represented graphically and is clickable. If you click on any of the vertical or horizontal layers of the model, you will be directed to the benchmarks and/or knowledge annotated with these layers in the Toolbox. Browse through this search.

- The “Guided benchmark search” option. In simple terms, this is a search by the tags used to annotate benchmarks and knowledge nuggets. These tags range from technical to business aspects. You can click on the categories of tags to find related information. Browse some of the options of this search.

- Finally, the “Search by Blueprint/Pipeline” option presents graphically a generic architectural blueprint developed in DataBench with the most common elements of a big data architecture. The blueprint is aligned with the 4 steps of the DataBench generic data pipeline (data acquisition, preparation, analysis and visualization/interaction). The graphic is clickable both at the level of the four pipeline steps and at some of the detailed elements of the blueprint. Click on the parts of the diagram you are interested in to find a list of existing benchmarks and nuggets related to them, and browse some of them. There are nuggets that summarize existing big data tools for each element of the pipeline. See if you find it easy to browse through the results.
5. This part of the test is not guided, as we expect you to navigate through the options you have seen previously. Once you know how to navigate, try to find information relevant to your industry or area of interest:
• Try to find information about the most widely used KPIs or interesting use cases.
• Try to find information about architectural blueprints for your inspiration.  

Congratulations! You have completed the assignment of this campaign! Go now to fill in the feedback questionnaire.

Campaign Mailing List

▲ Back


STAMP

Software Testing AMPlification for the DevOps Team

▼ Click for campaign details and rewards

STAMP stands for Software Testing AMPlification. Leveraging advanced research in automatic test generation, STAMP aims at pushing automation in DevOps one step further through innovative methods of test amplification. 

STAMP reuses existing assets (test cases, API descriptions, dependency models) in order to generate more test cases and test configurations each time the application is updated. Acting at all steps of the development cycle, STAMP techniques aim at reducing the number and cost of regression bugs at the unit level, configuration level and production stage.

STAMP raises confidence in and fosters adoption of DevOps by the European IT industry. The project gathers four academic partners with strong software testing expertise, five software companies (in e-Health, Content Management, Smart Cities and Public Administration), and an open source consortium. This research, carried out close to industry, addresses concrete, business-oriented objectives.

 

Try the STAMP toolset

Estimated Test Duration:

2 hours

Incentives

You'll have nothing to lose and everything to gain, including time and quality in your software releases!
Moreover, you'll be among the first to experiment with the most advanced Java software testing tools.

And, as recognition for your efforts and useful feedback, you will receive a limited edition “STAMP Software Test Pilot” gift and be added as a STAMP contributor. This offer is limited to beta testers interacting with the team by 30 October 2019. You will be contacted individually for a customized gift and for contribution opportunities. Please provide a valid contact email.

Target beta testers profile:

Developers

Beta tester level:

Beginner

Campaign objectives

Trying the open source toolset is a free initiative that will amplify your testing efforts automatically. Experiment with DSpot, Descartes, CAMP or Botsing now.

Requirements for this campaign

Download and try DSpot or Descartes or CAMP or Botsing.
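
As a minimal sketch (an illustrative assumption, not part of the official campaign instructions): for a Java project that builds with Maven and already has JUnit tests, DSpot can be invoked from the command line to amplify the existing test suite. Check the DSpot README at https://github.com/STAMP-project/dspot for the current plugin coordinates and options.

$ cd my-maven-project        # placeholder for your own project root
$ mvn eu.stamp-project:dspot-maven:amplify-unit-tests

Descartes, CAMP and Botsing each have their own quick-start instructions in the STAMP-project GitHub organization.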

Beta test instructions and scenario

Campaign Mailing List

▲ Back


Energyshield - Security Culture Assessment tool

EnergyShield is a complete state-of-the-art security toolkit for the EPES sector

▼ Click for campaign details and rewards

EnergyShield captures the needs of Electrical Power and Energy System (EPES) operators and combines the latest technologies for vulnerability assessment, supervision and protection to draft a defensive toolkit. The project aims to: adapt and improve available building tools (assessment, monitoring & protection, remediation) in order to support the needs of the EPES sector; integrate the improved cybersecurity tools in a holistic solution with assessment, monitoring/protection and learning/sharing capabilities that work synergistically; validate the practical value of the EnergyShield toolkit in demonstrations involving EPES stakeholders; and develop best practices, guidelines and methodologies supporting the deployment of the solution, encouraging widespread adoption of the project results in the EPES sector.

 

Energyshield SBAM Tool

Estimated Test Duration:

20 to 30 minutes

Incentives

Beta testers will be acknowledged on our website.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner

Campaign objectives

EnergyShield has created a first version of the security culture assessment tool, and we would like to beta test this first version.

Requirements for this campaign

No requirements other than an internet connection and a browser; all browser types and devices are acceptable.

Beta test instructions and scenario

For the beta-testing campaign: create a user group in the tool, create a campaign, answer a questionnaire and review the results of the assessment. The tool is available at http://energyshield.epu.ntua.gr/. Information about the platform and a user guide are available here: https://1drv.ms/w/s!Avx-hU-EvNxviEse2KU6hPqEoY4O?e=Hn5byP

Campaign Mailing List

▲ Back


Carsharing Use Case

Car-sharing is a form of person-to-person or collaborative consumption, whereby existing owners rent their cars to other people for short periods of time.

▼ Click for campaign details and rewards

Car-sharing is a form of person-to-person or collaborative consumption, whereby existing owners rent their cars to other people for short periods of time. Essentially, this use case provides a collaborative business model as an alternative to private car ownership, allowing customers to use a vehicle temporarily on demand at a variable fee depending on the distance travelled or usage.

Project website:

 

Beta-tester Passengers

Estimated Test Duration:

30-45 minutes

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users

Beta tester level:

Intermediate

Campaign objectives

The objective of this campaign is to adapt the use case to the market that Agilia Center is targeting, finding insights that can be transformed into functionalities to be integrated into the development phase.
After this step, we will include these prerequisites in the roadmap of the service (Service Backlog) for the acceptance tests that will be carried out at the completion of the development stage. The process to test the requirements will follow a methodology designed not only to test the previous features but also to extract new information.

Requirements for this campaign

  • Android Device (Android Pie 9)
  • Allow app installs from Unknown Sources in Android
  • Internet Connection
  • Turn on the location on the phone or use a Fake GPS Application such as Fake GPS location or Fake GPS Free

Beta test instructions and scenario

Introduction

From now on, you are going to act as a passenger: a person who wants to share a car with other people (at least with a driver) for a short trip from one point to another.

Instructions

Instructions will be provided within the survey. Please go to the survey.

Campaign Mailing List

 

Beta-tester Vehicle Owner

Estimated Test Duration:

1-2 hours

Reward for this campaign:  

30€

Target beta testers profile:

Business users

Beta tester level:

Intermediate

Campaign objectives

The objective of this campaign is to adapt the use case to the market that Agilia Center is targeting, finding insights that can be transformed into functionalities to be integrated into the development phase.
Specifically, the end users we want to target with this campaign are vehicle owners who can rent out their vehicles and earn money from them. The car sharing/carpooling platform allows these users to submit their vehicle, setting the price and the escrow for its use.

Requirements for this campaign

  • Android Device (Android Pie 9)
  • Allow app installs from Unknown Sources in Android
  • Internet Connection
  • Turn on the location or use a Fake GPS Application such as Fake GPS location or Fake GPS Free

Beta test instructions and scenario

Introduction

From now on, you are going to act as an individual owner: a person who has a vehicle and wants to share costs by allowing other users to rent it for a short period of time.

Download and Install the Mobile App

  1. Download .apk from https://drive.google.com/file/d/1xXMWmUq4D5UzTbzt83fazpkWSkeeGQ9R/view?usp=sharing.
  2. Install the Android App.
  3. Run the Carsharing App.
  4. Allow Carsharing to access location: Press Allow all the time.
  5. Allow Carsharing to access photos, media and files: Press Allow.

Sign up and Login

Now you are going to create your credentials. Since the Agilia solution is based on a permissioned blockchain network, the credentials are a pair of certificates. These certificates are encrypted and stored on your mobile, protected by a password.

  1. Press Create Certificate.
  2. Fill in a Name and a Password.
  3. Press Create.
  4. Logout:
    1. Open the burger menu.
    2. Press Exit.
  5. Import certificates of a created user:
    1. Press Import Certificate.
    2. Find the Certificate file in /Android/data/com.carsharing/files.
    3. Fill in the password that you entered previously in order to decrypt your certificates.
    4. Press Submit.

Create a new Vehicle

You need to register your vehicle in the application to allow other users to rent it. 

  1. Navigate to the Vehicles Screen (Second option of the burger menu).
  2. Create a new vehicle:
    1. Press the + button.
    2. Fill in, at least, the required fields: License Plate, Brand, Model, Colour, Seats, Year, Vehicle State. If you set the Vehicle State to BAD, your vehicle cannot be rented.
    3. Press the Save button.
  3. Now, in the vehicle list, you can see the car that you have created.

Create an Offer

You need to create an offer in order to show other users that your car is available to be rented.

  1. Navigate to the My Offers Screen (Third option of the burger menu).
  2. Create a new offer:
    1. Press the + button.
    2. Fill in, at least, the required fields: License Plate, Price for KM, Price For Time, Start Date, End Date, Escrow and Start Place.
    3. Press the Save button.
  3. Now, in the My offers list, you can see the offer that you have created.

Watch trips related to your vehicle

You can see the trips associated with your vehicles after at least one driver has reserved a trip with your car.

  1. Navigate to the Vehicles Screen (Second option of the burger menu).
  2. Find the vehicle that you want to inspect (you can use the filters).
  3. Press the vehicle.
  4. Press the ... button (blue button) and the eye option (green color).
  5. This screen shows the trips related to your selected vehicle.
  6. Press the trip whose details you want to see.

Withdraw credit (CSCoins)

After at least one trip has finished, you can see that your CSCoins balance has increased. You can then withdraw your CSCoins. 1 CSCoin equals 1 Euro.

  1. Press the CSCoins Button (in the header bar, at the top right).
  2. Press Withdraw CSCoins.
  3. Fill in the email of your PayPal Sandbox account. Please select one of the following accounts:
    1. sb-ejr771011751@personal.example.com
    2. sb-fxu4391011730@personal.example.com.
  4. Fill the amount.
  5. Press Withdraw.
  6. When the Paypal workflow is finished, press OK.
  7. The transaction can take some minutes. Please refresh the screen (pull-to-refresh gesture).

Campaign Mailing List

▲ Back


more completed campaigns

Latest Upcoming Campaigns

TRIPLE

The GoTriple platform is an innovative multilingua...

Safe-DEED

A competitive Europe where individuals and compani...

OpenSensorWeb

OPEN·SENSOR·WEB is a data exploration platform for...

OpenDroP

OpenDroP is a research platform for UAV and D...

AmenesikCloudEngine

The Amenesik Cloud Engine is an industrialized rew...

 

The Beta-Testing Campaign Platform for Research Projects.

What is ReachOut's main objective? ReachOut helps H2020 projects in the area of software technologies to develop beta-testing campaigns for their software. ReachOut helps build bridges between projects and their markets. ReachOut provides projects with end-to-end support to develop and launch beta-testing campaigns so as to enable them to concretely engage with their potential users and develop their ecosystems.



News and Events


 


What is Beta-Testing?

Beta testing is intended to collect feedback from customers on a pre-release product to improve its quality. This is the last stage before shipping a product. Not only does it help finalize a product, it is also a marketing tactic that helps develop a base of early adopters.



Community

Be part of the growing ReachOut community. Subscribe here to receive new campaigns, best practices, and recommendations.


Contact Us

Do not hesitate to write to us directly for any other questions, proposals or partnership enquiries.



Partner Projects

Beneficiaries of H2020 cascade funding projects are welcome to join ReachOut. More.

  • EDI
  • NGI Ledger
  • NGI Pointer
  • NGI DAPSI

The Reachout project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement number 825307.

The information in this document is provided “as is”, and no guarantee or warranty is given that the information is fit for any particular purpose. The content of this document reflects only the author's view – the European Commission is not responsible for any use that may be made of the information it contains. The users use the information at their sole risk and liability.

This wiki is licensed under a Creative Commons 4.0 license
XWiki Enterprise 12.10.7 - Documentation

    
