Completed Campaigns




Wayeb

Wayeb is a Complex Event Processing and Forecasting (CEP/F) engine written in Scala. It is based on symbolic automata and Markov models.


 

Wayeb

Estimated Test Duration:

30 min

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered the chance to be added to the ReachOut "Hall of Fame" and will automatically take part in the ReachOut Super Prize; 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Developers

Beta tester level:

Beginner

Campaign objectives

As Wayeb is going to be released in the coming months, we need to make sure that all of its functions work properly in the maritime and bio domains.

Requirements for this campaign

In order to build Wayeb from the source code you need to have Java SE version 8 or higher and SBT installed on your system. Java 8 is recommended.
More details about the build process can be found here: https://github.com/ElAlev/Wayeb/blob/main/README.md
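A quick way to verify the prerequisites from a terminal (a minimal check; the exact output depends on your installation):

$ java -version
$ sbt sbtVersion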

Beta test instructions and scenario

The tests to be done concern the build, recognition and forecasting processes.

1) Building: First download Wayeb from https://github.com/ElAlev/Wayeb. Assuming $WAYEB_HOME is the root directory of Wayeb:

$ cd $WAYEB_HOME

Then build a fat jar:

$ sbt assembly
If it prints a success message, it passes the test.
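To confirm that the fat jar was actually produced, you can list the build output directory (a quick check; the jar path is the one used by the commands in the following steps):

$ ls cef/target/scala-2.12/
# wayeb-0.2.0-SNAPSHOT.jar should appear in the listing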

2) Recognition: In $WAYEB_HOME/data/demo/data.csv you may find a very simple dataset, consisting of 100 events. The event type is either A, B or C. In $WAYEB_HOME/patterns/demo/a_seq_b_or_c.sre you may find a simple complex event definition for the above dataset: it detects an event of type A followed by another event of type B or C. If we want to run this pattern over the stream, we must first compile it into an automaton.
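Make sure you have created a results folder under $WAYEB_HOME; a minimal way to do so, assuming you start from the Wayeb root:

$ cd $WAYEB_HOME
$ mkdir -p results
# -p makes the command safe to re-run if the folder already exists

Then compile the pattern: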

$ java -jar cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar compile --patterns:patterns/demo/a_seq_b_or_c.sre --declarations:patterns/demo/declarations.sre --outputFsm:results/a_seq_b_or_c.fsm

Now, results/a_seq_b_or_c.fsm is the produced serialized finite state machine. Note that we also provided as input a declarations.sre file (via the --declarations argument above). This file simply lets the engine know that the three predicates IsEventTypePredicate(A), IsEventTypePredicate(B) and IsEventTypePredicate(C) are mutually exclusive (i.e., an event can have only one type). This helps the compiler create a more compact automaton. We can use this FSM to perform event recognition on this simple dataset:

$ java -jar cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar recognition --fsm:results/a_seq_b_or_c.fsm --stream:data/demo/data.csv --statsFile:results/recstats

If it prints information about the throughput and the number of matches, the pattern is being recognized in the stream and the test passes.

3) Forecasting: For forecasting, we first need to use a training dataset in order to learn a probabilistic model for the FSM. For this simple guide, we will use $WAYEB_HOME/data/demo/data.csv both as a training and as a test dataset, solely for convenience. Normally, you should use different datasets.
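If you would rather mimic a proper train/test split with the demo file, here is a minimal sketch using standard tools (assuming the 100-event file has no header row; the output names are illustrative):

$ head -n 50 data/demo/data.csv > results/train.csv   # first half for training
$ tail -n 50 data/demo/data.csv > results/test.csv    # second half for testing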

We first run maximum likelihood estimation:

$ java -jar cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar mle --fsm:results/a_seq_b_or_c.fsm --stream:data/demo/data.csv --outputMc:results/a_seq_b_or_c.mc

The file results/a_seq_b_or_c.mc is the serialized Markov model. The final step is to use the FSM and the Markov model to perform forecasting:

$ java -jar cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar forecasting --modelType:fmm --fsm:results/a_seq_b_or_c.fsm --mc:results/a_seq_b_or_c.mc --stream:data/demo/data.csv --statsFile:results/forestats --threshold:0.5 --maxSpread:10 --horizon:20 --spreadMethod:classify-nextk

The last command should return some classification statistics like precision, f1 and accuracy.

Campaign Mailing List



Build your own Social Media APP

HELIOS provides a toolkit for P2P Social Media applications. Come and build your own social media APP!


We are providing tools to develop novel social media applications for Android. The tools contain not only basic messaging but also other features, like communication in contexts and information overload control. There are also other optional modules available in the tools.

The aim now is to download the HELIOS tools and try them out. You need basic Android programming skills.

To get started, we are providing you with a sample APP to build, with source code on GitHub. It should be built with a recent version of Android Studio, targeting a minimum Android version of 9.

Tutorial videos are available here: https://helios-social.com/helios-for-devs/tutorials/

You'll find detailed how-to-build instructions on GitHub:
 https://github.com/helios-h2020/h.app-TestClient

-

HELIOS has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement N° 825585

 

Build your own Social Media APP

Estimated Test Duration:

4 hours

Reward for this campaign:  

60€

Incentives

By participating, you will be among the forerunners developing the next generation of social media that no longer depends on the corporations that currently run social media platforms.

As a sign of gratitude, we will offer 12x 60€ rewards for trying out the toolkit and reporting your experiences.

We will close the campaign after 10 participants have answered the survey. Therefore, try to proceed swiftly to guarantee your reward.

Also, Beta Testers will be offered the chance to be added to the ReachOut "Hall of Fame" and will automatically take part in the ReachOut end-of-project Super Prize.

Target beta testers profile:

Developers

Beta tester level:

Intermediate

Campaign objectives

HELIOS has released source code to build P2P social media applications. We would like feedback on building a sample APP from the sources.

Requirements for this campaign

- Android Studio
- Android phone, version 9 and above, with network access (preferable)
- basic Android programming skills

Tutorial videos are available here: https://helios-social.com/helios-for-devs/tutorials/

You'll find detailed how-to-build instructions on GitHub:
https://github.com/helios-h2020/h.app-TestClient

Beta test instructions and scenario

1) Get familiar with HELIOS instructions
2) Download sample codes and relevant libraries
3) Build the sample APP into an APK (you may modify the sample if you wish); a command-line build sketch follows after this list
4) Install the APK to the Android phone (in the absence of a phone, you can use an emulator)
5) Verify with the Android phone that the app launches OK.
6) Send a unique message (such as a random number) to "BugChat" channel in the APP and do the following steps:
  - In the app, open the options menu (the three dots to the right of the title)
  - Tap “Discover others”
  - Search for your nickname and WRITE DOWN the first 6 characters of your ID (it is in the format “nickname @ID”)
7) Fill in the survey and let us know
  - what was the message you sent, with rough time/date information, and
  - the first 6 characters of your ID
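For step 3, here is a minimal command-line build sketch, assuming the sample follows the standard Android Gradle project layout (the app module name and APK output path are assumptions; building from the Android Studio GUI achieves the same):

$ git clone https://github.com/helios-h2020/h.app-TestClient.git
$ cd h.app-TestClient
$ ./gradlew assembleDebug
# with the standard layout, the debug APK lands under app/build/outputs/apk/debug/
$ adb install app/build/outputs/apk/debug/app-debug.apk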

Campaign Mailing List



DECODER - doc2json

DEveloper COmpanion for Documented and annotatEd code Reference


DECODER builds an Integrated Development Environment (IDE) that combines information from different sources through formal and semi-formal models to deliver software project intelligence, shortening the learning curve of software programmers and maintainers and increasing their productivity. Developers will deliver high-quality code that is more secure and better aligned with requirements, and maintainers will immediately know what has been done, how, and with what tools.

 

doc2json

Estimated Test Duration:

10 minutes

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered the chance to be added to the ReachOut "Hall of Fame" and will automatically take part in the ReachOut Super Prize; 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate

Campaign objectives

doc2json extracts text/data from word/openoffice/excel documents into json format.

With appropriate parsing algorithms provided by the user, it can extract data from any structured documentation. The samples directory contains algorithms to extract the text of a word/openoffice document into a json format that nests the sections of the document. It also contains algorithms to extract data from invoices following the same openoffice template.

The innovative part of this project consists in translating a user's master algorithm that controls the events coming from the documentation into a slave algorithm that can be interrupted and introspected. The resulting parsing library contains many `goto`s to reflect the state machine model as the result of the translation. This part still has bugs; it nevertheless works on the 4 parsing algorithms of the project.

The goal of this campaign is to make sure that the first version of doc2json functions as expected.

Feedback is expected about:

  • Potential usages of doc2json (on a database of documents)
  • The effectiveness and the current limitations of doc2json

Check the current functions offered and try them at the end on your own documents.

Requirements for this campaign

doc2json takes as input a word/openoffice document (like the internal documentation) and extracts the text/data into a json file whose format can be specified by the user.

To install and build doc2json, you will need:
- a Linux environment (only Ubuntu 20.04 has been tested) with the following packages: git, zlib1g-dev, g++, libclang-dev (these packages also exist on Windows and macOS; a port to these environments is planned),
- or the ability to create Docker images and execute Docker containers.

The test described below is for Linux and/or Docker.

Beta test instructions and scenario

Install doc2json

doc2json is located at https://gitlab.ow2.org/decoder/doc_to_asfm

Linux installation

The application requires zlib https://zlib.net/ to retrieve the content of the documents and Clang Tools https://clang.llvm.org/docs/ClangTools.html to convert the user's parsing algorithm into an interruptible reading algorithm.

To install these libraries, you can type the following commands:

> sudo apt-get install zlib1g-dev clang libclang-dev
> apt list --installed "libclang*-dev"

If the clang version is less than clang-10 (for instance clang-6), the next cmake build step may fail, and you will need to update to clang-10 with the following commands:

> sudo apt-get install clang-10 libclang-10-dev
> sudo apt-get purge --auto-remove libclang-common-6.0-dev
> sudo ln -s /usr/bin/clang-10 /usr/bin/clang

You can also check that llvm provides its own header files to the Clang Tools:

> llvm-config --includedir

should answer with a path that contains llvm-xx. Fedora, for instance, returns /usr/include, which prevents the Clang Tools from finding some headers like <stddef.h> that are required for string manipulation during the source-to-source transformation. In such a case, you can try the Docker installation, which is more robust.

Please note that if, after having tested doc2json, you need to revert to your original clang 6 version, just type:

# do not use these commands before having built the project and the algorithms libraries
> sudo rm /usr/bin/clang
> sudo apt-get install clang

Then you can download and build the project:

> git clone git@gitlab.ow2.org:decoder/doc_to_asfm.git doc2json
> cd doc2json
> mkdir build
> cd build
> cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$PWD ..
> make -j 4
# make test is optional: it checks that doc2json works on the internal documentation
> make test
> make install

Docker installation

To download and build it (Docker users):

> git clone git@gitlab.ow2.org:decoder/doc_to_asfm.git doc2json
> cd doc2json
> sudo docker build -t doc2json_img .
> docker run --name doc2json -it doc2json_img bash
# in the docker container
> cd /doc2json

Minimal test (to check doc2json works)

Under a Linux system, you can then run some test examples in the build directory (which is also the installation directory):

> ./bin/doc2json ./share/doc2json/libInvoiceReader.so -c ./share/doc2json/config-invoice.xml ../test/invoice.ods -o test/invoice.json
> ./bin/doc2json ./share/doc2json/libTextWordReader.so -c ./share/doc2json/config-text-word.xml ../test/StructuredDocument_french.docx -o test/StructuredDocument_french_word.json
> ./bin/doc2json ./share/doc2json/libTextOpenofficeReader.so -c ./share/doc2json/config-text-openoffice.xml ../test/StructuredDocument_french.odt -o test/StructuredDocument_french_openoffice.json

Under the Docker container, you can run the same test examples from the /doc2json directory

> ./bin/doc2json ./share/doc2json/libInvoiceReader.so -c ./share/doc2json/config-invoice.xml src/test/invoice.ods -o src/test/invoice.json
> ./bin/doc2json ./share/doc2json/libTextWordReader.so -c ./share/doc2json/config-text-word.xml src/test/StructuredDocument_french.docx -o src/test/StructuredDocument_french_word.json
> ./bin/doc2json ./share/doc2json/libTextOpenofficeReader.so -c ./share/doc2json/config-text-openoffice.xml src/test/StructuredDocument_french.odt -o src/test/StructuredDocument_french_openoffice.json

These commands extract the content of the documents into json format. You can open the document ../test/StructuredDocument_french.docx and compare its content with the result of the extraction, that is, the file test/StructuredDocument_french_word.json.
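To skim the extracted structure, you can pretty-print the generated json (a sketch, assuming python3 is available on your system):

> python3 -m json.tool test/StructuredDocument_french_word.json | head -n 20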

Then, in the build directory for Linux users and in the /doc2json/src directory for Docker users,

> diff test/StructuredDocument_french_openoffice.json test/StructuredDocument_french_word.json

should show no differences between the two extracted json files, even though the origin formats (word versus openoffice/opendocument) are very different.

A utility, create-reader.sh, is provided to generate a parsing library from a custom user's parsing algorithm. Hence the command (in the build directory for Linux users and in the /doc2json directory for Docker users)

> ./bin/create-reader.sh -I . share/doc2json/InvoiceReader.cpp

regenerates the parsing library share/doc2json/libInvoiceReader.so from the parsing algorithm share/doc2json/InvoiceReader.cpp. The transformation into a state machine model is generated in the file share/doc2json/InvoiceReaderInstr.cpp.

Apply doc2json to your documents (a text with a title and sections, subsections, subsubsections, ...)

We now suppose that your document is named file.docx or file.odt. It should have a title identified by a specific style.

You may run the test in /tmp with an environment variable DOC2JSON_INSTALL_DIR that refers to the installation directory of doc2json. This is the build directory for Linux users. You can type

export DOC2JSON_INSTALL_DIR=$PWD

in the build directory before trying the test. This environment variable is automatically set for Docker users, but you need to copy your document into the /tmp directory of the Docker container with the command

docker cp .../file.docx doc2json:/tmp

You need to look at the names of the styles used for the headings of your document.
By default, for a French word document, these styles are Titre, Titre1, Titre2, Titre3, ... For an English word document, they may be Title, Heading1, Heading2, Heading3, ... You need to provide these styles to doc2json by modifying the configuration file config-text-word.xml, replacing the French Titre styles with the styles that appear in the style ribbon of Word.

cp $DOC2JSON_INSTALL_DIR/share/doc2json/config-text-word.xml /tmp
# edit config-text-word.xml and replace Titre1 by Heading1 or by the styles of the different sections of your document

If the title is not recognized, doc2json will answer "bad generation of the output format!" and produce a corrupted output json document. This will be improved in a future version.

For the opendocument format, you need to find the style ribbon by clicking on the parameter menu (top right) of LibreOffice. Then styles like "Heading 1" should be replaced by "Heading_20_1" in config-text-openoffice.xml (spaces are replaced by "_20_"). Sometimes LibreOffice renames these styles internally; for instance, it may rename the "Title" style to "P1" or "P2". The parsing algorithm is not smart enough to recognize this renaming (it will be in a future version). So if the extraction fails, you can unzip your file.odt and then edit the file content.xml to look for the text of your "Title" and see what the associated style is. Do not spend too much time on that point if it fails.
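A minimal way to inspect those internal style names from the command line (a sketch; unzip and grep are assumed to be installed):

> unzip -o file.odt content.xml -d /tmp/file_odt
> grep -o 'style-name="[^"]*"' /tmp/file_odt/content.xml | sort -u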

Then the appropriate command

> cd /tmp
> $DOC2JSON_INSTALL_DIR/bin/doc2json $DOC2JSON_INSTALL_DIR/share/doc2json/libTextWordReader.so -c /tmp/config-text-word.xml file.docx -o file.json
> $DOC2JSON_INSTALL_DIR/bin/doc2json $DOC2JSON_INSTALL_DIR/share/doc2json/libTextOpenofficeReader.so -c /tmp/config-text-openoffice.xml file.odt -o file.json

should extract in file.json the sections and the text of your document.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire, in order to enter the ReachOut incentives programme and to join the mailing list for this campaign, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.




SMOOTH

Assisting Micro Enterprises to adopt and be compliant with GDPR


The SMOOTH project assists micro enterprises to adopt and be compliant with the General Data Protection Regulation (GDPR) by designing and implementing easy-to-use and affordable tools that generate awareness of their GDPR obligations and analyse their level of compliance with the new data protection regulation.

 

SMOOTH Market Pilot

Estimated Test Duration:

20-35 minutes

Incentives

1) A free GDPR compliance report including a series of recommendations to improve your company’s compliance with the GDPR.
  
2) Be compliant and avoid potential fines. Lack of awareness, expertise and resources makes small enterprises the most vulnerable institutions in the face of strict enforcement of the GDPR.

3) Build up your brand reputation with clients and network by showing you have adequate solutions in place to protect their data.

Also, Beta Testers will be offered the chance to be added to the ReachOut "Hall of Fame" and will automatically take part in the ReachOut Lottery; 24 randomly chosen Beta Testers will be awarded a money prize in recognition.

Target beta testers profile:

Business users

Beta tester level:

Beginner

Campaign objectives

The objective of this campaign for the SMOOTH project is to reach 500 micro-enterprises to complete the market pilot.

Requirements for this campaign

Micro-enterprises: enterprises that employ fewer than 10 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 2 million,

or small enterprises (SMEs): enterprises that employ fewer than 50 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 10 million, excluding enterprises that qualify as micro-enterprises.

Beta test instructions and scenario

Please read these instructions carefully before completing the questionnaires.

To connect to the SMOOTH platform and perform the test, please use this link.

Campaign Mailing List



TRIPLE

The GoTriple platform is an innovative multilingual and multicultural discovery solution for the social sciences and humanities (SSH).


TRIPLE stands for Transforming Research through Innovative Practices for Linked Interdisciplinary Exploration. The GoTriple platform will provide a single access point that allows you to explore, find, access and reuse materials such as literature, data, projects and researcher profiles at European scale.
It is based on the Isidore search engine developed by Huma-Num (a unit of CNRS).
A prototype will be released in autumn 2021.
It will be one of the dedicated services of OPERAS, the research infrastructure supporting open scholarly communication in the social sciences and humanities in the European Research Area.

 

GoTriple Beta Testing

Estimated Test Duration:

Around 30 minutes

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered the chance to be added to the ReachOut "Hall of Fame" and will automatically take part in the ReachOut Super Prize; 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate, Advanced

Campaign objectives

The Triple Project has released a beta version of its discovery platform GoTriple, with an initial set of features. More features will be added in the coming months, until March 2022. The aim of this campaign is to test the beta platform, pick up any usability issues and improve the platform so that the final version will meet the needs of the end-users.

Requirements for this campaign

Ideally, you will have a background in the Social Sciences and Humanities and experience in searching for information and research material, such as scientific publications.

Beta test instructions and scenario

The beta version of the software can be accessed at https://www.gotriple.eu  

Instructions:

Test 1: Goal to find which authors published the most on a topic

  1. Go to the GoTriple Beta testing platform via the above web address
  2. Enter a search term of your choice 
  3. Browse the results of the search
  4. Select the 'Visual' View of results 
  5. Explore the visual view elements
  6. Refine the results to show papers from just one of the disciplines provided 
  7. Clear the refinement to show results from all disciplines again
  8. Find which authors published the most on this topic
  9. Click on an author name to view other publications from this author.

Test 2: Goal to produce a Knowledge Map and examine it

  1. Make a new search on 'Society + Covid'
  2. Refine the results to show only papers published in 2020 
  3. Clear the 2020 selection
  4. Find book chapters published on this topic (same search Society + Covid)
  5. Clear the book chapter selection to return to the overall search list
  6. Create a Knowledge Map for this search (be patient it takes a bit of time!) 
  7. Examine the knowledge map and see the grouped publications 
  8. Return to Home Page

Test 3: Goal to examine Disciplines and produce a Streamgraph

  1. Examine the Disciplines Tab - try clicking on any that are of interest to you
  2. View a list of publications from a discipline 
  3. Use the filter to refine the results shown
  4. Return to the Home page
  5. Make a new search on the term 'Co-design'
  6. View the Streamgraph for this search   
  7. Examine the results of the Streamgraph
  8. Visit the GOTRIPLE tab to view project information 

Campaign Mailing List



DataBench Toolbox

Based on existing efforts in big data benchmarking, the DataBench Toolbox provides a unique environment to search, select and deploy big data benchmarking tools and knowledge about benchmarking


At the heart of DataBench is the goal to design a benchmarking process that helps European organizations developing Big Data Technologies (BDT) to reach for excellence and constantly improve their performance, by measuring their technology development activity against parameters of high business relevance.

DataBench will investigate existing Big Data benchmarking tools and projects, identify the main gaps and provide a robust set of metrics to compare technical results coming from those tools.


 

Generation of architectural Pipelines-Blueprints

Estimated Test Duration:

30 minutes plus mapping to blueprints that requires desk analysis

Incentives

As recognition for your efforts and useful feedback, you will be added as a DataBench contributor on our website, your blueprint will be published, and the authorship of your contribution will be acknowledged in the Toolbox. This offer is limited to beta testers interacting with the team by 15 December 2020. You will be contacted individually for contribution opportunities. Please provide a valid contact email during the survey phase and in the form for suggestions of new blueprints.

Also, Beta Testers will be offered the chance to be added to the ReachOut Hall of Fame and will take part in the ReachOut Lottery; 16 randomly selected beta testers providing a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Developers

Beta tester level:

Advanced

Campaign objectives

DataBench has released the DataBench Toolbox, a one-stop shop for big data and AI benchmarking. It offers a catalogue of existing benchmarking tools and information about technical and business benchmarking. 

This campaign (extended until the end of January 2021) aims at getting content in the form of new architectural big data/AI blueprints mapped to the BDV Reference Model and the DataBench pipeline/blueprint. In this campaign we focus mainly on advanced users who would like to contribute practical examples of mapping their architectures to the generic blueprints. The results will be published in the DataBench Toolbox, acknowledging ownership, and can be used by the owners for their own purposes in their projects/organizations to demonstrate their alignment with existing standardization efforts in the community.

Note that we provide information about the BDV Reference Model, the four steps of the DataBench generic data pipeline (data acquisition, preparation, analysis and visualization/interaction), and the generic big data blueprint devised in DataBench, as well as some examples and best practices for producing the mappings. Testers should study the available DataBench information and guidelines. Then, using the provided steps, testers should prepare their own mappings, resulting diagrams and explanations, if any. The Toolbox provides a web form interface to upload all relevant materials, which will later be assessed by an editorial board in DataBench before final publication in the Toolbox.

Requirements for this campaign

- Having a big data/AI architecture in place in your project/organization
- Willingness to provide mappings from your architecture to be part of the DataBench pipeline/blueprints
- Basic Knowledge of web browsing
- Internet connection
- Use preferably Google Chrome

For any inquiry regarding this campaign, please write an email to databenchtoolbox@gmail.com.

Beta test instructions and scenario

The Toolbox is accessible without the need to log in to the system, but the options are then limited to pure search; without registering, the options in the menu are very few. To perform this campaign, we would like all involved users to first sign up to the DataBench Toolbox to get a user profile that you will use throughout the campaign:

- Go to https://databench.ijs.si/ and click on the “Sign up” option located at the top right of the page.

- Fill in the form to generate your new user by providing a username and password of your choice, your organization, your email, and your user type (at least Technical for this exercise).

Once you have created your user, please sign in to the Toolbox. You will be directed to the Toolbox main page again, where you will see that you have more options available.

Besides the options available through the menu, the main page provides:
A) a carousel with links,
B) user journeys for users of different profiles: Technical, Business and Benchmark providers,
C) videos aimed at these 3 types of users, briefly explaining the main functionalities offered to each of them,
D) shortcuts to some of the functionalities, such as the FAQ, access to the benchmark and knowledge catalogues, the DataBench Observatory, etc.

A) Get information about DataBench pipelines and blueprints

This campaign aims at providing you with the means to search and browse existing data pipelines, together with explanations of how to map your own architecture to efforts such as the BDV Reference Model, the DataBench Framework and the mappings with existing initiatives.

We encourage you to first go to the Technical user journey, accessible from the front page of the Toolbox, read it, and follow the links given to you to get acquainted with the entries related to blueprints and pipelines. In the “Advanced” user journey you will find the following:

- A link to the DataBench Framework and its relation to the BDV Reference Model, where you can find an introduction to the different elements that compose the DataBench approach towards technical benchmarking.

- A link to the DataBench Generic Pipeline, where the 4 main steps of a data pipeline are explained. These 4 steps are the basic building blocks for the mappings to other blueprints and existing initiatives.

- The User Journey - Generic Big Data Analytics Blueprint: this is the main piece of information you need to understand what we mean by mapping an existing architecture to our pipelines and blueprints. You will find links to the generic pipeline figure.

- A practical example of creating a blueprint and a derived cost-effectiveness analysis, targeting the telecommunications industry.

- Ways to report your suggestions for new blueprints, using the Suggest blueprint/pipeline option under the Knowledge Nuggets menu.

Below is a summary of the minimal set of actions we encourage you to do:

1. Go to the User journeys area of the main page and click on “Technical”.

2. Go to the link to the User Journey: Generic Big Data Analytics Blueprint at the bottom of the “Advanced” area of the page.

3. Read and understand the different elements of the pipeline (the 4 steps) and the elements of the generic blueprint as described in the previous link.

4. Check examples of already existing blueprints. To do that, use the search box located at the top right corner and type “blueprint”. Browse through the blueprints.

B) Desk analysis

Once you are familiar with the DataBench Toolbox and the main concepts related to the blueprints, you need to do some homework: try to map your own architecture to the DataBench pipeline and the generic blueprint. We suggest the following steps:

- Prepare a figure with the architecture you have in mind in your project/organization. 

- Create links to the 4 steps of the data pipeline and generate a new figure showing the mapping.

- Create links to the Generic Big Data Analytics Blueprint figure and generate a new figure showing the mappings. To do so, you might take the generic pipeline figure and particularize it to your components, as was done in the example provided for the telecommunications industry.

C) Upload your blueprint to the Toolbox

- Upload your files as PDFs or images using the blueprint suggestion form available from the Knowledge Nuggets menu. Try to include a description with a few words about the sector of application of your blueprint, the main technical decisions, or anything you might find interesting to share.

- The DataBench project will review the blueprints and publish them on the platform, acknowledging your authorship.

Congratulations! You have completed the assignment of this campaign! Go now to fill in the feedback questionnaire. Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

 

Finding the right benchmarks for technical and business users

Estimated Test Duration:

30 to 40 minutes

Incentives

As recognition for your efforts and useful feedback, you will be added as a DataBench contributor on our website. This offer is limited to beta testers interacting with the team by 6 December 2020. You will be contacted individually for contribution opportunities. Please provide a valid contact email during the survey phase.

Also, Beta Testers will be offered the chance to be added to the ReachOut Hall of Fame and will take part in the ReachOut Lottery; 16 randomly selected beta testers providing a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users, Developers

Beta tester level:

Intermediate

Campaign objectives

DataBench has released the DataBench Toolbox, a one-stop shop for big data and AI benchmarking. It offers a catalogue of existing benchmarking tools and information about technical and business benchmarking. 

This campaign aims at getting feedback on the usage of the tool and on the user interface of the web front-end of the Toolbox. The Toolbox provides a set of user journeys, or suggestions, for three kinds of users: 1) Technical users (people interested in technical benchmarking), 2) Business users (interested in finding facts, tools, examples and solutions to make business choices), and 3) Benchmark providers (users from benchmarking communities or who have generated their own benchmarks). In this campaign we focus mainly on technical and business users. We provide some minimal instructions for these two types of users, in order to understand whether finding information in the Toolbox is a cumbersome process, and to get your feedback. The idea is to use the user journeys drafted in the Toolbox to drive this search process and to understand whether users find this information sufficient to kick-start the process of finding the right benchmark and knowledge they were looking for.

Requirements for this campaign

- Previous knowledge about Big Data or AI
- Basic Knowledge of web browsing
- Internet connection
- Use preferably Google Chrome

For any inquiry regarding this campaign, please write an email to databenchtoolbox@gmail.com.

Beta test instructions and scenario

The Toolbox is accessible without the need to log in to the system, but the options are then limited to pure search; without registering, the options in the menu are very few.

Initial steps to log in as a Toolbox user

To perform this campaign, we would like all involved users to first sign up to the DataBench Toolbox and create a user profile that you will use throughout the campaign:

- Go to http://databench.ijs.si/ and click on the “Sign up” option located at the top right of the page.
- Fill in the form to generate your new user by providing a username and password of your choice, your organization, your email, and your user type (Technical and/or Business, depending on your preferences and skills).

Once you have created your user, please sign in to the Toolbox. You will be directed to the Toolbox main page again, where you will see that you have more options available.

Besides the options available through the menu, the main page provides:
A) a carousel with links,
B) user journeys for users of different profiles: Technical, Business and Benchmark providers,
C) videos aimed at these 3 types of users, briefly explaining the main functionalities offered to each of them,
D) shortcuts to some of the functionalities, such as the FAQ, access to the benchmark and knowledge catalogues, the DataBench Observatory, etc.

A) For Technical Users

This campaign aims at using the user journeys as a starting point to help you navigate the tool. We encourage you to click on the Technical user journey, read it and follow the provided links to get acquainted with the tool and what you can do with it. Get used to the two main catalogues: the benchmarks catalogue (tools for big data and AI benchmarking) and the knowledge nuggets catalogue (information about technical and business aspects related to benchmarking and big data technologies). Learn about existing big data architectural blueprints and browse some of them.

Additionally, if you already have a goal in mind (e.g. finding a benchmark for testing a specific ML model, or comparing the characteristics of different NoSQL databases), we encourage you to try to find the appropriate benchmark and report your conclusions later in the questionnaire.

Below is a summary of the minimal set of actions we encourage you to do:

1. Go to the User journeys area of the main page and click on “Technical”.

2. Read the content of this page, divided into advice for “Beginners” (first-time users) and “Advanced” (extra recommendations on what to do next). Focus first on the “Beginners” area and click on the different links to browse the different options and get used to the tool. We recommend coming back to the User journey page until you have clicked all the available options for beginners, but feel free to stray and use the navigation and links from other pages to get used to the tool. After you have clicked all the options for beginners, you should have seen the benchmarks and knowledge nuggets catalogues, used some of the search functionalities and browsed some of the existing architectural blueprints. You are now ready to go further!

3. Focus now on the “Advanced” area of the User journey page.

- Here you will find ways to suggest new content via web forms (e.g. new benchmarks you might know that are missing from the catalogue, a version of a big data blueprint you are dealing with in a project, or a new knowledge nugget based on your experience). We are not expecting you to fill in these forms at this stage, just to acknowledge their potential value (and feel free to contribute any time).

- You will also find links to more specific, advanced user journeys and practical examples at the end of the advanced user journeys. Click on the ones that catch your attention and start navigating via the links they offer. From this moment on, we expect that you know the main options of the Toolbox and how to navigate and browse through it. You should have noticed by now that both benchmarks and knowledge nuggets are annotated and categorized with clickable tags, which makes navigation through related items possible.

4. Get used to the search functionalities. The Toolbox offers 4 types of search:
- The search text box located at the top right corner of the pages. This is a full-text search: you can enter any text, and the results matching that text from both the benchmark and knowledge nuggets catalogues will appear.

- The “Search by BDV Reference Model” option from the menu allows you to have a look at the model created by the BDV PPP community (check the BDV SRIA for more details). The model is represented graphically and is clickable. If you click on any of the vertical or horizontal layers of the model, you will be directed to the benchmarks and/or knowledge annotated with these layers in the Toolbox. Browse through this search.

- The “Guided benchmark search”. In simple terms, this is a search by the tags used to annotate benchmarks and knowledge nuggets. These tags range from technical to business aspects. You can click on the categories of tags to find related information. Browse some of the options of this search.

- Finally, the “Search by Blueprint/Pipeline” option presents graphically a generic architectural blueprint developed in DataBench with the most common elements of a big data architecture. The blueprint is aligned with the 4 steps of the DataBench generic data pipeline (data acquisition, preparation, analysis and visualization/interaction). The graphic is clickable, both at the level of the four steps of the pipeline and on some of the detailed elements of the blueprint. Click on the parts of the diagram you are interested in to find a list of existing benchmarks and nuggets related to them. Browse some of them. There are nuggets that show a summary of existing big data tools for each of the elements of the pipeline. See if you find it easy to browse through the results.

Congratulations! You have completed the assignment of this campaign! Go now to fill in the feedback questionnaire. 

NOTE – Some of the available benchmarks can be deployed and run on your premises. Those are listed first in the benchmark catalogue, and when you click on them you will find the configuration file at the bottom of their description. If you want to run any of them, you will need dedicated infrastructure to do so. We are not expecting you to do that in this exercise.

B) For Business users

As for technical users, this campaign aims at using the user journeys as a starting point to help you navigate the tool. We encourage you to click on the Business user journey, read it and follow the links given to you to get acquainted with the tool and what you can do with it. Get used to the two main catalogues: the benchmarks catalogue (tools for big data and AI benchmarking), but mainly the knowledge nuggets catalogue (information about technical and business aspects related to benchmarking and big data technologies). Learn about existing big data architectural blueprints and browse some of them, as they apply to different industries and might be of interest for business purposes.

Additionally, if you already have a goal in mind (e.g. finding the most widely used business KPIs in a specific sector), we encourage you to try to find the appropriate information in the knowledge nugget catalogue and report your conclusions later in the questionnaire.

Below is a summary of the minimal set of actions we encourage you to do:

1. Go to the User journeys area of the main page and click on “Business”.

2. Read the content of this page, divided into advice for “Beginners” (first-time users) and “Advanced” (extra recommendations on what to do next). Focus first on the “Beginners” area and click on the different links to browse the different options and get used to the tool. We recommend coming back to this User journey page until you have clicked all the available options for beginners, but feel free to stray and use the navigation and links from other pages to get used to the tool. After you have clicked all the options for beginners, you should have seen the benchmarks and knowledge nuggets catalogues, used some of the search functionalities and browsed some of the existing architectural blueprints. You are now ready to go further!

3. Focus now on the “Advanced” area of the User journey page.
- You will find links to different elements, such as nuggets related to business KPIs, by industry, etc. Browse through them and follow the links.

- You will find ways to suggest new content via web forms (e.g. a new knowledge nugget based on your experience). We are not expecting you to fill in these forms at this stage, just to acknowledge their potential value (but feel free to contribute any time).

- You will also find links to more specific, advanced user journeys and practical examples at the end of the advanced user journeys. Click on the ones that catch your attention and start navigating via the links they offer. From this moment on, we expect that you know the main options of the Toolbox and how to navigate and browse through it. You should have noticed by now that both benchmarks and knowledge nuggets are annotated and categorized with clickable tags, which makes navigation through related items possible.

4. Get used to the search functionalities. The Toolbox offers 4 types of search:
- The search text box located at the top right corner of the pages. This is a full-text search: you can enter any text, and the results matching that text from both the benchmark and knowledge nuggets catalogues will appear.

- The “Search by BDV Reference Model” option from the menu allows you to have a look at the model created by the BDV PPP community (check the BDV SRIA for more details). The model is represented graphically and is clickable. If you click on any of the vertical or horizontal layers of the model, you will be directed to the benchmarks and/or knowledge annotated with these layers in the Toolbox. Browse through this search.

- The “Guided benchmark search”. In simple terms, this is a search by the tags used to annotate benchmarks and knowledge nuggets. These tags range from technical to business aspects. You can click on the categories of tags to find related information. Browse some of the options of this search.

- Finally, the “Search by Blueprint/Pipeline” option presents graphically a generic architectural blueprint developed in DataBench with the most common elements of a big data architecture. The blueprint is aligned with the 4 steps of the DataBench generic data pipeline (data acquisition, preparation, analysis and visualization/interaction). The graphic is clickable, both at the level of the four steps of the pipeline and on some of the detailed elements of the blueprint. Click on the parts of the diagram you are interested in to find a list of existing benchmarks and nuggets related to them. Browse some of them. There are nuggets that show a summary of existing big data tools for each of the elements of the pipeline. See if you find it easy to browse through the results.

5. This part of the test is not guided, as we expect you to navigate through the options you have seen previously. Once you know how to navigate, try to find information of interest for your industry or area of interest:
• Try to find information about the most widely used KPIs or interesting use cases.
• Try to find information about architectural blueprints for your inspiration.

Congratulations! You have completed the assignment of this campaign! Go now to fill in the feedback questionnaire.

Campaign Mailing List



ENSURESEC

ENSURESEC addresses the whole gamut of modern e‑commerce, from standard physical products purchased online and delivered via post, to entirely virtual products or services delivered online.


Online shopping and payment is followed by product delivery in physical, online or virtual form. As a service of services, the current e-commerce ecosystem is booming. Cyber and physical threats are also rising. The EU-funded ENSURESEC project will improve the EU’s vision of a reliable and trusted digital single market. It will develop innovations applicable to any critical infrastructure that relies on and is monitored by networked software systems. Focusing on the full range of modern e-commerce (from standard physical products purchased online and delivered via post to entirely virtual products or services delivered online), the project will address threats ranging from malicious modification of web e-commerce applications to delivery issues or fraud committed by insiders or customers. It will also launch a campaign to inform SMEs and citizens about the threats.

ENSURESEC is a sociotechnical solution for safeguarding the Digital Single Market’s e-commerce operations against cyber and physical threats. It combines an automatic, rigorous, distributed and open-source toolkit for protecting e-commerce, with monitoring of the impact of threats in physical space and a campaign for training SMEs and citizens aimed at creating awareness and trust. 

ENSURESEC addresses the whole gamut of modern e-commerce, from standard physical products purchased online and delivered via post, to entirely virtual products or services delivered online. It addresses threats ranging from maliciously modifying web e-commerce applications or rendering them unavailable to legitimate customers, to delivery issues or fraud committed by insiders or customers. It achieves this by focusing on the common software and physical sensor interfaces that sit along the e-commerce, payment and delivery ecosystem. 

At technical level, it integrates proven state-of-the-art inductive (machine learning) with deductive (formal methods) reasoning tools and techniques so that e-commerce operations are protected by design, as well as through continuous monitoring, response, recovery and mitigation measures at run-time. Importantly, trust of the infrastructure’s operations among its users is established, benefiting from distributed ledger technology ensuring transparency of the operations and that information has not been modified. 

Although ENSURESEC innovations are applicable to any critical infrastructure that relies on and is monitored by networked software systems, its design and integration philosophy make it uniquely prepared to protect distributed and evolving e-commerce infrastructures with their various forms of payment and delivery (virtual, online and physical).

ENSURESEC also enhances citizens’ resilience to threats and their trust in e-commerce companies, especially SMEs, thus contributing towards the vision of a reliable and trusted digital single market.

 

ENSURESEC WP8 - Deployments and Evaluation

Estimated Test Duration:

1 hour

Target beta testers profile:

Business users

Beta tester level:

Beginner

Campaign objectives

For the ENSURESEC software evaluation:

  • Technical partners should first carry out the unit and integration testing
  • End-users and domain experts will then be engaged for the evaluation
  • Software evaluation verifies whether a product or software is fit for the purpose it was built for, namely that it:
    • fulfills business requirements
    • can be used by end-users
  • The basic principles of the ISO/IEC 25010:2011 process will be employed
  • All of the infrastructure needed (software and hardware) should be in place

Requirements for this campaign

Participants of the software evaluation survey must:

  • Belong to an ENSURESEC consortium end-user partner
  • Be engaged to at least one pilot scenario

Tool providers of ENSURESEC systems must:

  • Complete the development of their tools
  • Integrate their tools in the ENSURESEC system
  • Provide access to the end-users to their tools

Beta test instructions and scenario

Before answering the questionnaires, participants of the survey must:

  • Complete at least one pilot scenario
  • Have clear instructions about the usage of the software as well as the goals of each scenario
  • Answer all the questions of the survey, which is divided into three main sections:
    • A general section concerning the participant's information
    • Evaluation of the pilot as a whole
    • Evaluation of each application included in the pilot

Campaign Mailing List



STAMP

Software Testing AMPlification for the DevOps Team


STAMP stands for Software Testing AMPlification. Leveraging advanced research in automatic test generation, STAMP aims at pushing automation in DevOps one step further through innovative methods of test amplification. 

STAMP reuses existing assets (test cases, API descriptions, dependency models) in order to generate more test cases and test configurations each time the application is updated. Acting at all steps of the development cycle, STAMP techniques aim at reducing the number and cost of regression bugs at the unit level, configuration level and production stage.

STAMP raises confidence in, and fosters adoption of, DevOps by the European IT industry. The project gathers four academic partners with strong software testing expertise, five software companies (in e-Health, Content Management, Smart Cities and Public Administration), and an open source consortium. This industry-near research addresses concrete, business-oriented objectives.

 

Try the STAMP toolset

Estimated Test Duration:

2 hours

Incentives

You'll have nothing to lose and everything to win, including time and quality in your software releases!
Moreover, you'll be among the first to experiment with the most advanced Java software testing tools.

And, as recognition for your efforts and useful feedback, you will receive a limited edition “STAMP Software Test Pilot” gift and be added as a STAMP contributor. This offer is limited to beta testers interacting with the team by 30 October 2019. You will be contacted individually for a customized gift and for contribution opportunities. Please provide a valid contact email.

Target beta testers profile:

Developers

Beta tester level:

Beginner

Campaign objectives

Trying the open source toolset is a free initiative that will amplify your testing efforts automatically. Experiment with DSpot, Descartes, CAMP or Botsing now.

Requirements for this campaign

Download and try DSpot, Descartes, CAMP or Botsing.
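For example, a minimal way to fetch DSpot (the repository location is an assumption based on the STAMP project's GitHub organization; see each tool's README for build and usage instructions):

$ git clone https://github.com/STAMP-project/dspot.git
$ cd dspot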

Beta test instructions and scenario

Campaign Mailing List



EnergyShield - Security Culture Assessment Tool

EnergyShield is a complete state-of-the-art security toolkit for the EPES sector


EnergyShield captures the needs of Electrical Power and Energy System (EPES) operators and combines the latest technologies for vulnerability assessment, supervision and protection to draft a defensive toolkit. The project will:

- Adapt and improve available building tools (assessment, monitoring & protection, remediation) in order to support the needs of the EPES sector.
- Integrate the improved cybersecurity tools in a holistic solution with assessment, monitoring/protection and learning/sharing capabilities that work synergistically.
- Validate the practical value of the EnergyShield toolkit in demonstrations involving EPES stakeholders.
- Develop best practices, guidelines and methodologies supporting the deployment of the solution, and encourage widespread adoption of the project results in the EPES sector.

 

EnergyShield SBAM Tool

Estimated Test Duration:

20 to 30 minutes

Incentives

Beta testers will be acknowledged on our website.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner

Campaign objectives

EnergyShield has created a first version of the security culture assessment tool. We would like to beta test this first version.

Requirements for this campaign

No requirements except an internet connection and a browser; all browser types and devices are acceptable.

Beta test instructions and scenario

For the beta testing campaign: create a user group in the tool, create a campaign, answer a questionnaire and review the results of the assessment. The URL of the website is http://energyshield.epu.ntua.gr/. Information and a guide to the platform are available here: https://1drv.ms/w/s!Avx-hU-EvNxviEse2KU6hPqEoY4O?e=Hn5byP

Campaign Mailing List



TRUSTS

Trusted Secure Data Sharing Space


TRUSTS will ensure trust in the concept of data markets as a whole via its focus on developing a platform based on the experience of two large national projects, while allowing the integration and adoption of future platforms by means of interoperability. The TRUSTS platform will act independently and as a platform federator, while investigating the legal and ethical aspects that apply on the entire data valorization chain, from data providers to consumers, i.e., it will:

- set up a fully operational and GDPR-compliant European Data Marketplace for personal and non-personal data, targeting individual and industrial use, by leveraging existing data marketplaces (Industrial Data Space, Data Market Austria) and enriching them with new functionalities and services to scale out.

- demonstrate and realise the potential of the TRUSTS Platform in 3 use cases targeting the industry sectors of corporate business data in the financial and operator industries, while ensuring it is supported by a viable, compliant and impactful governance, legal and business model.

 

TRUSTS requirements elicitation

Estimated Test Duration:

20 mins

Target beta testers profile:

Business users, Developers

Beta tester level:

Intermediate, Advanced

Campaign objectives

The TRUSTS consortium aims at receiving responses to the requirements elicitation questionnaire and at interviewing industrial, academic and regulatory domain experts in order to guide the TRUSTS data marketplace specification. Your responses will help us to evaluate the functionality, services and operational capacity of such an endeavour and to establish its operation.

Requirements for this campaign

In this questionnaire you will be asked about the data-sharing processes in your organization; it is therefore aimed at people who need to exchange or trade data within their organization.

Beta test instructions and scenario

Just follow the link to the questionnaire.

Campaign Mailing List

▲ Back


Safe-DEED

A competitive Europe where individuals and companies are fully aware of the value of the data they possess and can feel safe to use it.

▼ Click for campaign details and rewards

Safe-DEED (Safe Data-Enabled Economic Development) brings together partners from the cryptography, data science, business innovation, and legal domains to focus on improving security technologies, increasing trust, and diffusing privacy-enhancing technologies. Furthermore, as many companies have no data valuation process in place, Safe-DEED provides a set of tools to facilitate the assessment of data value, thus incentivizing data owners to make use of the scalable cryptographic protocols developed in Safe-DEED to create value for their companies and their clients.

Project website:

 

Personal Data Demonstrator

Estimated Test Duration:

1 hour

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users

Beta tester level:

Beginner, Intermediate, Advanced

Campaign objectives

The Safe-DEED project would like to evaluate the completeness of the proposed demonstrator in terms of business application, value and roles.

Requirements for this campaign

You can access the demonstrator using any Web browser.

Beta test instructions and scenario

Please follow the https://demo.safe-deed.eu/ link and evaluate all subordinate applications. Instructions can be found in embedded videos on the main page of the demonstrator as well as on the applications' pages. Additional explanations are also provided where appropriate.

Campaign Mailing List

▲ Back


Carsharing Use Case

Car-sharing is a form of person-to-person or collaborative consumption, whereby existing owners rent their cars to other people for short periods of time.

▼ Click for campaign details and rewards

Car-sharing is a form of person-to-person or collaborative consumption, whereby existing owners rent their cars to other people for short periods of time. Essentially, this use case provides a collaborative business model as an alternative to private car ownership, allowing customers to use a vehicle temporarily, on demand, at a variable fee depending on the distance travelled or usage.

Project website:

 

Beta-tester Passengers

Estimated Test Duration:

30-45 minutes

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users

Beta tester level:

Intermediate

Campaign objectives

The objective of this campaign is to adapt the use case to the market that Agilia Center is targeting, finding insights that can be transformed into functionalities to be integrated during the development phase.
After this step, we will include these prerequisites in the roadmap of the service (Service Backlog) for the acceptance tests that will be carried out at the completion of the development stage. The process to test the requirements will follow a methodology designed not only to test the previous features but also to extract new information.

Requirements for this campaign

  • Android device (Android 9 Pie)
  • Allow app installs from unknown sources in Android
  • Internet connection
  • Turn on location on the phone, or use a fake GPS application such as Fake GPS location or Fake GPS Free

Beta test instructions and scenario

Introduction

From now on, you will act as a passenger: a person who wants to share a car with other people (at least with a driver) for a short trip from one point to another.

Instructions

Instructions are provided within the survey. Please go to the survey.

Campaign Mailing List

 

Beta-tester Drivers

Estimated Test Duration:

30-45 minutes

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users

Beta tester level:

Intermediate

Campaign objectives

The objective of this campaign is to adapt the use case to the market that Agilia Center is targeting, finding insights that can be transformed into functionalities to be integrated during the development phase.
After this step, we will include these prerequisites in the roadmap of the service (Service Backlog) for the acceptance tests that will be carried out at the completion of the development stage. The process to test the requirements will follow a methodology designed not only to test the previous features but also to extract new information.

Requirements for this campaign

  • Android device (Android 9 Pie)
  • Allow app installs from unknown sources in Android
  • Internet connection
  • Turn on location, or use a fake GPS application such as Fake GPS location or Fake GPS Free

Beta test instructions and scenario

Introduction

From now on, you will act as a driver: a person who wants to rent a car and drive it for a short trip from one point to another.

Instructions

Instructions are provided within the survey. Please go to the survey.

Campaign Mailing List

 

Beta-tester Vehicle Owner

Estimated Test Duration:

1-2 hours

Reward for this campaign:  

30€

Target beta testers profile:

Business users

Beta tester level:

Intermediate

Campaign objectives

The objective of this campaign is to adapt the use case to the market that Agilia Center is targeting, finding insights that can be transformed into functionalities to be integrated during the development phase.
Specifically, the end users we want to target with this campaign are vehicle owners, who can rent out their vehicles and earn money from them. The car sharing/carpooling platform allows these users to submit their vehicles, setting the price and the escrow for their use.

Requirements for this campaign

  • Android device (Android 9 Pie)
  • Allow app installs from unknown sources in Android
  • Internet connection
  • Turn on location, or use a fake GPS application such as Fake GPS location or Fake GPS Free

Beta test instructions and scenario

Introduction

From now on, you will act as an individual owner: a person who has a vehicle and wants to share costs by allowing other users to rent it for short periods of time.

Download and Install the Mobile App

  1. Download .apk from https://drive.google.com/file/d/1xXMWmUq4D5UzTbzt83fazpkWSkeeGQ9R/view?usp=sharing.
  2. Install the Android app (or sideload it with adb, as sketched after this list).
  3. Run the Carsharing App.
  4. Allow Carsharing to access location: Press Allow all the time.
  5. Allow Carsharing to access photos, media and files: Press Allow.
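
If your device is connected to a computer with USB debugging enabled, you can alternatively sideload the APK with adb; a minimal sketch, assuming the downloaded file was saved as carsharing.apk (the actual filename may differ):

$ adb install carsharing.apk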

Sign up and Login

Now you are going to create your credentials. Since the Agilia solution is based on a permissioned blockchain network, the credentials are a pair of certificates. These certificates are encrypted and stored on your mobile device, protected by a password.

  1. Press Create Certificate.
  2. Fill in a Name and a Password.
  3. Press Create.
  4. Log out:
    1. Open the burger menu.
    2. Press Exit.
  5. Import the certificates of a created user:
    1. Press Import Certificate.
    2. Find the certificate file in /Android/data/com.carsharing/files.
    3. Fill in the password that you entered previously in order to decrypt your certificates.
    4. Press Submit.

Create a new Vehicle

You need to register your vehicle in the application to allow other users to rent it. 

  1. Navigate to the Vehicles Screen (Second option of the burger menu).
  2. Create a new vehicle:
    1. Press the + button.
    2. Fill in, at least, the required fields: License Plate, Brand, Model, Colour, Seats, Year, Vehicle State. If you set the vehicle state to BAD, your vehicle cannot be rented.
    3. Press the Save button.
  3. Now, in the vehicle list, you can see the car that you have created.

Create an Offer

You need to create an offer in order to show other users that your car is available to be rented.

  1. Navigate to the My Offers Screen (Third option of the burger menu).
  2. Create a new offer:
    1. Press the + button.
    2. Fill in, at least, the required fields: License Plate, Price for KM, Price For Time, Start Date, End Date, Escrow and Start Place.
    3. Press the Save button.
  3. Now, in the My offers list, you can see the offer that you have created.

Watch trips related to your vehicle

You can see the trips associated with your vehicles after at least one driver reserves a trip with your car.

  1. Navigate to the Vehicles Screen (Second option of the burger menu).
  2. Find the vehicle that you want to inspect (you can use the filters).
  3. Press the vehicle.
  4. Press the ... button (blue) and then the eye option (green).
  5. This screen shows the trips related to your selected vehicle.
  6. Press the trip whose details you want to see.

Withdraw credit (CSCoins)

After at least one trip has finished, you will see that your CSCoins balance has increased; you can then withdraw your CSCoins. 1 CSCoin equals 1 Euro.

  1. Press the CSCoins Button (In the header bar in the top right-hand).
  2. Press Withdraw CSCoins.
  3. Fill in the email of your PayPal Sandbox account. Please select one of the following accounts:
    1. sb-ejr771011751@personal.example.com
    2. sb-fxu4391011730@personal.example.com.
  4. Fill in the amount.
  5. Press Withdraw.
  6. When the Paypal workflow is finished, press OK.
  7. The transaction can take a few minutes. Please refresh the screen (pull-to-refresh gesture).

Campaign Mailing List

▲ Back


ReachOut

Beta-testing campaigns for research projects

▼ Click for campaign details and rewards

ReachOut is a Coordination and Support Action (CSA) helping H2020 projects in the area of software technologies to implement beta-testing campaigns. ReachOut acts as an operational intermediary between research projects and the open market: it helps research projects implement beta-testing best practices, recruits beta testers by running promotion initiatives, and develops connections between research projects and potential users and beta testers.

 

Testing the ReachOut platform

Estimated Test Duration:

30 minutes to 1 hour

Reward for this campaign:  

30€

Incentives

By participating in this survey, you will help the ReachOut project provide a better service to research projects.
Upon your consent, you will be added to the ReachOut Hall of Fame.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate, Advanced

Campaign objectives

The goal of this campaign is to test the ReachOut platform.
It targets H2020 projects that would like to set up a beta-testing campaign.
You can find out more about ReachOut and the methodology on this page.

Requirements for this campaign

In order to start testing the ReachOut platform, you need to have ready:

  • the name, short and long description of your project
  • a logo file for your project
  • the objectives of the beta-testing campaign
  • the estimated duration it will take a beta tester to test your beta-version
  • a beta version of your software available to download
  • requirements for beta testers to test your software (list of pre-installed software, hardware requirements, operating systems constraints, ...)
  • a comprehensive test scenario and instructions
  • (optional) incentives for beta testers to participate in the campaign

Beta test instructions and scenario

In order to test the ReachOut platform, you will need to:

  1. Visit https://reachout-project.eu
  2. Register as a campaign manager
    You will need to provide your details, e-mail address, login and password.
    You will receive a message to your e-mail address with an activation link.
  3. Create your project
    You will have to fill in the project details.
  4. Create your campaign with the appropriate campaign details.
    Note that you can use the XWiki syntax for formatting the details (links, bullets, ...)
  5. Customize the questionnaire in LimeSurvey
    You can do this by clicking on the "Customize and activate the associated questionnaire" button. Log into LimeSurvey using your ReachOut login and password provided during the registration.
    Once in LimeSurvey, you can edit the questions.
  6. Activate the questionnaire in LimeSurvey
  7. Manage the progress of your campaign using the campaign dashboard
    To do this, go back to your home page on the ReachOut website, and click on the Dashboard button below your campaign details. Then edit the dashboard and save.
  8. Fill in the questionnaire (as a beta tester)
    To do this, log out of LimeSurvey and ReachOut, go to the ReachOut website, click on "Checkout existing campaigns" and fill in the questionnaire.
  9. View the answers on LimeSurvey
    Log into LimeSurvey and go to your campaign on LimeSurvey.
    Then, click on Statistics in the left menu, then on the "Simple mode" button top left. You can view statistics about the answers that have been provided by beta testers.

Campaign Mailing List

▲ Back


Parsec

Simply collaborate with complete confidentiality and integrity in the cloud, with innovative "anti-ransomware" security, available today in an intuitive solution for sharing sensitive data.

▼ Click for campaign details and rewards

Parsec is the secure collaborative solution that provides confidential data sharing and storage in the cloud, whether public or private.
In order to improve the user experience of the solution, we are setting up this series of tests on ReachOut that will allow us to evaluate the following points:

- File management features
- Administrative and user management features
- Ergonomics
- Usability
- Interface design

Parsec is available as a desktop version on Windows, Mac and Linux; an Android version will soon be available to the public.
Our Parsec solution is certified by the ANSSI (Agence nationale de la sécurité des systèmes d'information).


Project website:

 

Parsec PC

Estimated Test Duration:

30min

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users

Beta tester level:

Beginner, Intermediate, Advanced

Campaign objectives

In order to improve the user experience of the solution, we are implementing this series of tests on ReachOut which will allow us to evaluate the following points:

  • File management features
  • Administrative and user management functions
  • Ergonomics
  • Usability
  • Interface design


Requirements for this campaign

Parsec is available as a desktop version for Windows, Mac and Linux; an Android version will soon be available to the public.
This first test will focus on the Parsec desktop version v2.3.1 for Windows, Mac or Linux, downloadable from the Parsec website - Get Parsec.

Parsec's vocabulary is specific to the software; all test participants should read the vocabulary section in the Parsec User Guide.

You can also watch the video explaining how Parsec works.

For part of the scenario, you will need to invite a second user. You can either use another e-mail address (in which case you will need a second computer) or invite someone you know.


Beta test instructions and scenario

STEP 1: Installation and creation of the working environment

  1. Download Parsec from the Parsec website - Get Parsec.
  2. Install the software by following the instructions received by email.
  3. Open Parsec.
  4. Create your organization and your workspaces.

STEP 2: Invite a new user into your organization

  1. Invite a new user by referring to the UG (user guide).
  2. Share a workspace with the user.
  3. Test collaborative file synchronization with your guest by modifying the content of a file, then check whether all modifications made to the file are taken into account.
  4. Test the history function on the same file (see UG).

STEP 3: Your workspace and its features

  1. Import files into your workspaces from the Parsec software interface.
  2. Import files into the shared Parsec directory using your PC file explorer. 
  3. Modify a file, save and exit.
  4. Test the different workspace features:
  • Going back in time
  • Sharing files
  • Renaming a file


Campaign Mailing List

▲ Back


Zql

Java SQL parser

▼ Click for campaign details and rewards

Zql is a Java SQL parser, generated using JavaCC. It parses SQL constructs (no DDL) and generates a parse tree, accessible through a Java API.

 

Zql beta-test

Estimated Test Duration:

10 minutes

Target beta testers profile:

Business users

Beta tester level:

Beginner, Intermediate, Advanced

Campaign objectives

Build and run unit tests.

Requirements for this campaign

Java 5 or above, Maven.

Beta test instructions and scenario

- Check out the project using git:

$ git clone https://github.com/gibello/zql.git

- Build the project using Maven:

$ cd zql/
$ mvn clean install
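
The build above compiles the parser and, as part of the standard Maven lifecycle, also runs the unit tests; a BUILD SUCCESS message means they passed. To re-run the tests alone afterwards:

$ mvn test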

▲ Back


MORPHEMIC

The MORPHEMIC project proposes a unique way of adapting and optimizing Cloud computing applications.

▼ Click for campaign details and rewards

MORPHEMIC extends the MELODIC platform (www.melodic.cloud) with polymorphic and proactive adaptation, in order to support live application reconfiguration. The former means that a component can run in different technical forms, i.e. in a Virtual Machine (VM), in a container, as a big data job, or as serverless components, etc. The technical form of deployment is chosen during the optimization process to fulfil the user's requirements and needs. The quality of the deployment is measured by a user-defined and application-specific utility. Depending on the application's requirements and its current workload, its components could be deployed in various forms in different environments to maximize the utility of the application deployment and the satisfaction of the user. Proactive adaptation is not only based on the current execution context and conditions, but aims to forecast future resource needs and possible deployment configurations. This ensures that adaptation can be done effectively and seamlessly for the users of the application.

The MORPHEMIC deployment platform will therefore be very beneficial for heterogeneous deployment in distributed environments combining various Cloud levels, including Cloud data centres, edge Clouds, 5G base stations, and fog devices. Advanced forecasting methods, including the ES-Hybrid method that recently won the M4 forecasting competition, will be used to achieve the most accurate predictions. The outcome of the project will be implemented as a complete solution, covering modelling, profiling, optimization, runtime reconfiguration and monitoring. The MORPHEMIC implementation will then be integrated as a pre-processor for the existing MELODIC platform, extending its deployment and adaptation capabilities beyond multicloud and cross-cloud to the edge, 5G, and fog. This approach allows for a path to early demonstrations and commercial exploitation of the project results.

 

Modelio CAMEL Designer

Estimated Test Duration:

30 mins to 1 hour

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Developers

Beta tester level:

Beginner, Intermediate

Campaign objectives

As Modelio CAMEL Designer is going to be released in the next months we need to make sure that all of its functions work properly.

Requirements for this campaign

In order to install and use our Modelio CAMEL Designer you need to have Java SE version 8 and Modelio 4.1 (see below for setting up Modelio) installed in your system. 

Beta test instructions and scenario

1. Setting up Modelio CAMEL designer

2. Create Camel Model: Steps to complete

  • Create an empty package by right-clicking your root UML project or any other Package -> Create Element -> Package
  • Right-click an empty package to show available commands and click on Camel Designer -> Create element -> Create Camel Model 

Expected results:

An empty Camel Model is created.

3. Create metric type model: Steps to complete

  • Right-click a CAMEL Model to display the list of available commands
  • Click on Camel Designer -> Create element -> Metric_Model 

Expected results:

A metric type model is created inside the CAMEL model.

4. Create software component: Steps to complete

  • Create a Deployment Model: right-click on the CAMEL model -> Camel Designer -> Create element -> Deployment Model
  • Create a Deployment Diagram: right-click the deployment model -> Camel Designer -> Create Diagram -> Deployment model Diagram
  • Open the deployment model diagram
  • In the palette, in the Deployment Type box, select the Software Component icon, then draw a rectangle inside the deployment model diagram to create a Software Component 

Expected results:

A software component is created and displayed in the diagram.

Campaign Mailing List

▲ Back


OLYMPUS

OLYMPUS is a framework for distributed identity management, improving security and user privacy when compared to traditional identity management solutions.

▼ Click for campaign details and rewards

ObLivious identitY Management for Private and User-friendly Services (OLYMPUS) is an EU research project focused on identity management. The project has resulted in a framework that, by utilizing distributed authentication and signing, improves the security of traditional identity services. Furthermore, a goal of the project is to improve user privacy by introducing various techniques for anonymity and unlinkability.

The framework contains three main building blocks: a protocol for Oblivious Pseudo-Random Functions, a protocol for distributed signatures, and a scheme for distributed privacy attribute-based credentials (dp-ABCs).

 

OLYMPUS framework demonstrator

Estimated Test Duration:

1-2 hours

Reward for this campaign:  

60€

Incentives

Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 12 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Developers

Beta tester level:

Intermediate

Campaign objectives

The OLYMPUS project has released a demonstrator of the framework's capabilities. The framework's overall objective is to improve security and user privacy in identity management solutions by introducing a distributed IdP. The demonstrator shows how the framework can be integrated into existing solutions, based on OpenID Connect and W3C's Verifiable Credentials.

As this is a somewhat advanced solution compared to traditional IdP solutions (and the internal development team is quite experienced in cryptography), we seek "normal" developers' feedback on the usability and understandability of the framework, i.e., whether the framework can be used by everyday developers. This includes development of service providers, user applications and identity providers.

Requirements for this campaign

In order to build and deploy the various servers, the following software must be installed:

  • Java 1.8 (with crypto-extensions)
  • Maven
  • Docker
  • NodeJs

The demonstrator use case is split into two flows: the first (OIDC) requires a browser, whereas the second (W3C) requires an Android-based smartphone.

Beta test instructions and scenario

Start by cloning the repository found at https://bitbucket.alexandra.dk/projects/OL/repos/usecase-3

Details of the test scenario and a step-by-step guide can be found in the "documentation" sub-folder. The scenario breaks down into 3 main tasks (see the sketch after this list):

  • Building the codebase.
  • Deploying a number of services.
  • Running through a simple use case.
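
As an illustration only, the first two tasks usually reduce to a standard clone-and-build sequence; a minimal sketch, assuming a Maven build as implied by the requirements above (the exact clone URL and the actual deployment commands are given in the repository's documentation folder):

$ git clone <clone URL shown on the repository page>
$ cd usecase-3
$ mvn clean install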

Campaign Mailing List

▲ Back


MELODIC

Open source multicloud management platform which allows for optimization and automation of deployment to different Cloud Providers (AWS, Azure, GCP, OpenStack-based).

▼ Click for campaign details and rewards

MELODIC is a cloud-agnostic and optimized way to multicloud: a multi-cloud management platform created within the H2020 project of the same name. The MELODIC platform enables and optimizes data-intensive applications to run within defined security, cost, and performance boundaries, seamlessly, on geographically distributed and federated cloud infrastructures. Applications are modelled and deployed in a cloud-agnostic manner. Optimization is done continuously using Reinforcement Learning algorithms. MELODIC is fully open source, licensed under the MPL.

 

MELODIC - multicloud management platform

Estimated Test Duration:

16 hours

Incentives

Melodic badge and certificate. For the first or most active beta testers, we will provide project goodies (mugs, ...).

Target beta testers profile:

Developers

Beta tester level:

Intermediate

Campaign objectives

By becoming a beta tester of Melodic you will learn how to use model@runtime with automatic adaptation and optimization of deployment to multicloud. Melodic is an open source multicloud management platform which allows for optimization and automation of deployment to different Cloud Providers (AWS and OpenStack-based).

Requirements for this campaign

  • Basic knowledge about Cloud Computing. 
  • Access to at least one Cloud Provider.

Beta test instructions and scenario

  1. Install Melodic on your machine as described on the Melodic download page (scenario1).
  2. Deploy a simple two-component application (scenario2.pdf).
  3. Install the Eclipse Oxygen-based Camel editor, which enables you to create your model; a manual is available on Melodic's website (scenario3).
  4. Model and deploy your own application using the Melodic platform (scenario4.pdf).

▲ Back


DataVaults

DataVaults aims to deliver a framework and a platform that puts personal data, coming from diverse sources, at its centre.

▼ Click for campaign details and rewards

A strong data economy is emerging in Europe, where both large companies and SMEs acknowledge the fundamental value of Big Data to cause disruptive change in markets and business models.
Nevertheless, the growth of the data economy is hampered by the lack of trusted, secure and ethics-driven personal data platforms and privacy-aware analytics methods capable of, on the one hand, securing the sharing of personal data and proprietary/commercial/industrial data and, on the other hand, strictly and fairly defining how value can be captured, produced, released and cashed out for the benefit of all the stakeholders involved.
Addressing these concerns on privacy, ethics and IPR ownership across the DataVaults value chain is one of the cornerstones of the project. Its goal of setting, sustaining and mobilizing an ever-growing ecosystem for personal data and insights sharing, and for enhanced collaboration between stakeholders (data owners and data seekers), relies exactly on the DataVaults personal data platform's extra functionalities and methods for retaining data ownership, safeguarding security and privacy, notifying individuals of their risk exposure, as well as on securing value flows based on smart contracts.
DataVaults aims to deliver a framework and a platform that puts personal data, coming from diverse sources, at its centre, and that defines secure, trusted and privacy-preserving mechanisms allowing individuals to take ownership and control of their data and share them at will, through flexible data sharing and fair compensation schemes, with other entities (companies or not). The overall approach will rejuvenate the personal data value chain, which can from now on be seen as a multi-sided and multi-tier ecosystem governed and regulated by smart contracts that safeguard personal data ownership, privacy and usage, and attribute value to the ones who produce it.

 

Survey on citizens perspective on Personal Data Sharing

Estimated Test Duration:

15 minutes

Incentives

Not Applicable

Target beta testers profile:

Business users

Beta tester level:

Beginner

Campaign objectives

The DataVaults project is trying to create a new personal data "platform" where people can safely upload their personal data and keep it encrypted and secure. To help us in this work, we want to find out how you feel about this widespread sharing of your data. This survey is therefore aimed at exploring citizens' perspectives, expectations, needs and concerns about personal data sharing.

Requirements for this campaign

The questionnaire is directed at citizens in their role as potential users of the DataVaults platform and app.

Beta test instructions and scenario

We prepared a questionnaire composed of 30 questions.

Campaign Mailing List

▲ Back


DECIDE

Multicloud Applications Towards the Digital Single Market

▼ Click for campaign details and rewards

DECIDE is a new-generation, multi-cloud, service-based software framework, providing mechanisms to design, develop, and dynamically deploy multi-cloud-aware applications in an ecosystem of reliable, interoperable, and legally compliant cloud services.
DECIDE is composed of a set of tools that cover the entire DevOps pipeline, from design and development to deployment and operations. All the tools are integrated via the DevOps framework UI, which provides a unified user interface and orchestrates their execution when necessary.

 

DECIDE Platform

Estimated Test Duration:

3 hours

Incentives

ReachOut goodies (ask for it!)

Target beta testers profile:

Developers

Beta tester level:

Intermediate

Campaign objectives

By becoming a beta tester of DECIDE you will be able to experience the full DevOps lifecycle of a multi-cloud application via the unified DECIDE DevOps framework UI.

Requirements for this campaign

• Intermediate knowledge on Cloud Computing.  
• Advanced knowledge on DevOps.

Beta test instructions and scenario

Install and configure the individual services (check the "Delivery and Usage" sections on each document):

  • ARCHITECT cloud patterns service
  • OPTIMUS simulation service
  • MCSLA service
  • ACSmI discovery/contracting/monitoring services
  • ADAPT deployment orchestrator
  • ADAPT Violation handler

Install and configure the DECIDE DevOps framework (check the "Delivery and Usage" section):

  • DECIDE DevOps framework

Connect to the DevOps framework web interface and follow the workflow to create, deploy and manage a multi-cloud application (check the "User Manual" sections on the aforementioned documents).

Campaign Mailing List

▲ Back


GeoTriples-Spark

Publishing geospatial data as Linked Open Geospatial Data. GeoTriples generates and processes extended R2RML and RML mappings that transform geospatial data from many input formats into RDF.

▼ Click for campaign details and rewards

Publishing geospatial data as Linked Open Geospatial Data. GeoTriples generates and processes extended R2RML and RML mappings that transform geospatial data from many input formats into RDF.

 

GeoTriples-Spark

Estimated Test Duration:

20 minutes

Target beta testers profile:

Developers

Beta tester level:

Beginner, Intermediate

Campaign objectives

We would like to verify that all RML functions run without exceptions.

Requirements for this campaign

To build and run the project, you will need Java 1.8 (Java 8), Maven 3 (or greater) and Spark 2.4.0.

To build the code, clone the repository https://github.com/LinkedEOData/GeoTriples and build it by executing:

$ mvn package

You can run experiments using the data in https://drive.google.com/file/d/1CZSjgCsRI4-vK82CR35po8Mix5rCjK7y/view?usp=sharing

You can find more information in the repository.

Beta test instructions and scenario

To run a simple experiment:

$ spark-submit --master local[*] --class eu.linkedeodata.geotriples.GeoTriplesCMD /path_to/geotriples-spark.jar spark -i /path_to/greece-natural-a/gis_osm_natural_free_1.shp -o /path_to/folder_to_store_results /path_to/greece-natural-a/gis_osm_natural_free_1.ttl

We would like to check whether the RML processor handles RML term maps as expected (see https://rml.io/specs/rml/#term-map).

The user should be able to provide any term map (mostly constant- or template-valued) and get the requested results. Term maps are defined by editing the .ttl file (the rr:objectMap fields; there are some examples in the document). Try executing the project with different term maps (mostly template-valued) to see how the program handles them; a sketch of the two variants follows.
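
A minimal sketch of the two variants inside a triples map, assuming an illustrative ex: prefix and a hypothetical shapefile attribute {name} (neither is taken from the provided mapping):

# constant-valued object map: every generated triple gets the same object
rr:predicateObjectMap [
    rr:predicate ex:featureClass ;
    rr:objectMap [ rr:constant "natural" ]
] ;

# template-valued object map: the object IRI is built from the attribute value
rr:predicateObjectMap [
    rr:predicate ex:name ;
    rr:objectMap [ rr:template "http://example.org/name/{name}" ]
] .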

Campaign Mailing List

▲ Back


Elastest

ElasTest Platform is being developed within a publicly funded project called ElasTest: an elastic platform for testing complex distributed large software systems.

▼ Click for campaign details and rewards

ElasTest Platform is being developed within a publicly funded project called "ElasTest: an elastic platform for testing complex distributed large software systems". ElasTest started on January 1, 2017 and finished on December 31, 2019.

Project website:

 

Try ElasTest Platform

Estimated Test Duration:

4 hours

Target beta testers profile:

Developers

Beta tester level:

Beginner, Intermediate

Campaign objectives

In this campaign, you will discover a testing scenario using ElasTest. A test being executed in ElasTest can make direct use of multiple integrated services (such as Web Browsers), and the tester can see all that monitoring information in the same graphical user interface and with advanced analysis features.

Requirements for this campaign

ElasTest can run on different platforms, such as a laptop, a Linux VM, or a server. For more information about the requirements to launch ElasTest, please visit: https://elastest.io/docs/tutorials/getting-started/

Beta test instructions and scenario

The detailed instructions to execute the beta test are available at: https://elastest.io/docs/try-elastest/

▲ Back


MSA UI

The Maritime Situational Awareness (MSA) application is a web-based platform providing end-users with tools for monitoring events, illegal activities and possible threats in the maritime environment.

▼ Click for campaign details and rewards

The Maritime Situational Awareness (MSA) application is a platform providing end-users and decision-making experts with tools and means, via an intuitive User Interface (UI), for monitoring and forecasting events, illegal activities and possible threats in the maritime environment.
It has been developed by the EU project http://www.infore-project.eu/.

The back-end of the application relies on advanced big data and AI techniques for (i) producing synopses of maritime data to improve scalability over large streams of data, (ii) detecting simple and complex maritime events, and (iii) forecasting maritime events.

The aforementioned components are the building blocks of automatic, sophisticated data science workflows that can be designed and executed using RapidMiner Studio. The results of the maritime workflows (i.e., maritime events) of the MSA application are available as Kafka topics, and they are displayed to end-users (e.g., mariners, coastguard authorities, VTS officers, etc.) via an interactive web interface.

The UI of the application is a real-time interactive map, where all output data from INFORE models, arriving as streams from Kafka topics, are rendered accordingly in the MSA UI.

End-users are able to monitor the area of their interest in real time and inspect all the crucial parameters related to MSA, such as:
- Display of the latest (current) position of vessels
- Visualization of simple & complex events (proximities between vessels, illegal fishing activities, etc.)
- Dynamic visualization of vessels' past tracks (trajectories) and past events that occurred

 

MSA UI

Estimated Test Duration:

15 minutes

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner

Campaign objectives

The MSA UI application aims to provide end-users with useful tools and means, via an intuitive user interface, to monitor and forecast events, illegal activities and possible threats in the maritime environment. The objective of this use case is to provide us with feedback about how feasible and easy it is to monitor crucial aspects of the maritime environment, such as the latest vessel positions and events: proximities between two ships, "in areas" events when a vessel crosses specified areas of interest, illegal fishing and more.

In the MSA UI, vessels are visualized with rectangular blue markers, while events are visualized with circle markers accompanied by a small pulse animation for simple events, or a larger pulse for complex events. The events that may occur are described below:

- Proximity Event: when two ships are close enough to each other
- In Areas Event: when a vessel enters a specified area of interest (e.g. anchorage area)
- AIS Off Event: when a vessel's AIS device is turned off
- Fishing (complex) Event: when a vessel is about to be engaged in illegal fishing activities.

Requirements for this campaign

In order to access the MSA User Interface, visit the application's page at https://msa.marinetraffic.com/ and fill in the Log in form with these user credentials:

- username: msa.demo.1@marinetraffic.com
- password: 93104276

After a successful log-in, you can pan around the map and use the corresponding tools to navigate through the application's UI.

Beta test instructions and scenario

For this use case, we consider an office agent who works at Piraeus Port in Athens, Greece, and needs to have an overview of the positions and statuses of vessels located near the port, as well as to track any illegal fishing activities in the open sea of the Saronic Gulf.

After successfully logging in to the MSA UI, follow the guidelines described below:

  1. Pan around the map and inspect the relevant sea area to get an overview of the vessel locations as well as the events occurring.

  2. Click on any vessel or event marker on the map to get more information about that particular marker. Try investigating the vessel's destination, speed and vessel type. As concerns the event markers, for some types of events (e.g. proximity & in-areas events), after clicking on the marker you can see extra geometries displayed as overlays, connecting the vessels that were engaged in a proximity event or showing the geometry of the area that a vessel entered at that exact time.

  3. Using the sidebar tools menu located at the left side of the browser window, click on the layers icon button and toggle the available data layers on and off.

  4. Using the filtering icon button underneath the layers button, try to filter the vessels by their type. Toggle between any desired vessel types and keep those needed. Do the same for event types accordingly.

  5. Find out how many events (simple and forecast ones) are in this particular sea area. This information is kept under the Events icon button in the left sidebar. Hit the Events icon to bring up the events panel list. There are two tabs, "Simple Events" & "Forecast Events". Now hover over the events list and click on any event card; the map zooms in to the event's specific location. Try this process for various types of events.

  6. Now pan across the map and hit a vessel marker. In the popup box showing the vessel's description, there is a "Past track" button. Click on this button to investigate the vessel's past track (trajectory) and look closely to find any past events that occurred along that trajectory line.

  7. Congratulations! Following the above steps, you have had an overview of the current maritime situation in this area by investigating the current vessel locations and events, and you were able to track any vessels that are about to be engaged in illegal activities such as illegal fishing.

Campaign Mailing List

▲ Back


DECODER - JmlGen

DEveloper COmpanion for Documented and annotatEd code Reference

▼ Click for campaign details and rewards

DECODER builds an Integrated Development Environment (IDE) that combines information from different sources through formal and semi-formal models to deliver software project intelligence, shortening the learning curve of software programmers and maintainers and increasing their productivity. Developers will deliver high-quality code that is more secure and better aligned with requirements, and maintainers will immediately know what has been done, how, and with what tools.

 

JmlGen

Estimated Test Duration:

1 hour

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Developers

Beta tester level:

Beginner

Campaign objectives

JmlGen generates JML annotations from what can be guessed out of a Java project: the result, a JML-annotated project, can then be processed by JML tools, like the OpenJML program verification tool.
Check the current functions offered and try them on your own Java software.

Requirements for this campaign

JmlGen takes as input a Java project (the project root directory, like the one produced by a "git clone"), and generates JML in Java files located in a specified destination directory.

To install and build JmlGen, you will need:
- A Java environment (Java 11 minimum)
- Maven

The test described below is for Linux, but should be adaptable to other platforms (at least, paths in configuration files would probably have to be adapted).

Beta test instructions and scenario

Install JmlGen

JmlGen is located at https://gitlab.ow2.org/decoder/jmlgen.

To download and build it:

$ git clone https://gitlab.ow2.org/decoder/jmlgen.git
$ cd jmlgen
$ mvn clean install

Minimal test (to check JmlGen works)

You can then run a test example, as follows:

$ java -jar target/jmlgen-0.0.1-SNAPSHOT.jar src/main/resources/jmlgen.properties

Some Java code with JML inside should be produced in /tmp/jmlgen (the original Java code is in the src/test/java/eu/decoder/sample_jmlgen/ directory).
To see it, for example:

$ cat /tmp/jmlgen/src/test/java/eu/decoder/sample_jmlgen/Sample*

(Note that you may customize the output directory by editing src/main/resources/jmlgen.properties and changing the value of the "target" property.)

Apply JmlGen to a real project

We will take as an example the OW2 sat4j project (https://gitlab.ow2.org/sat4j).

You may run the test in /tmp:

$ cd /tmp
$ git clone https://gitlab.ow2.org/sat4j/sat4j.git

Now create a JmlGen configuration file (let's say, /tmp/sat4j.properties), with the following content (you may copy/paste it):

root: /tmp/sat4j
destination: /tmp/sat4j-JML
sourcepath: org.sat4j.br4cp/src/main/java:org.sat4j.core/src/main/java:org.sat4j.intervalorders/src/main/java:org.sat4j.maxsat/src/main/java:org.sat4j.pb/src/main/java:org.sat4j.sat/src/main/java:org.sat4j.sudoku/src/main/java

(Note: sourcepath lists all source folders, separated by colons; in many Java projects, it would simply be set to "src/main/java".)
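
For instance, for a typical single-module project, a minimal configuration would be (paths hypothetical):

root: /tmp/myproject
destination: /tmp/myproject-JML
sourcepath: src/main/java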

Go back to the directory where you installed JmlGen, and run it:

$ java -jar target/jmlgen-0.0.1-SNAPSHOT.jar /tmp/sat4j.properties

You should see logs in the console detailing where JML annotations have been inserted; open some of the corresponding files (under /tmp/sat4j-JML) to discover the JML annotations.

For example, the following command should display some JML annotations inserted in the SuDoku.java sample of Sat4j:

$ cat /tmp/sat4j-JML/org.sat4j.sudoku/src/main/java/org/sat4j/apps/sudoku/SuDoku.java | grep "/*@"

Note that some annotations can be of immediate interest: for example, "non_null" annotations indicate that a method's result should be checked for null, as the method was called without such a check (for a call like "method1().method2()", JmlGen would annotate "method1()" as "non_null", which denotes a risk of a null pointer exception). A plain-text search for "non_null" annotations, even without any analysis tool, can be profitable, as sketched below.
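
For example, with the output directory from the Sat4j run above, a standard grep lists every generated "non_null" annotation, and piping to wc counts them:

$ grep -rn "non_null" /tmp/sat4j-JML
$ grep -rn "non_null" /tmp/sat4j-JML | wc -l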

Apply JmlGen to your own project

Now you are ready to use JmlGen on your own! Please report bugs/issues at https://gitlab.ow2.org/decoder/jmlgen/-/issues.

When done, use any third-party JML tool (like OpenJML) to perform analysis of your Java code, now instrumented by JmlGen.

Campaign Mailing List

▲ Back


INTUITE_AI

Our mission is to unleash the power of sensitive data. We created software capable of generating realistic artificial data to enable safe data sharing between companies.

▼ Click for campaign details and rewards

There is an ongoing conflict around customer data. On the one hand, customers want their privacy protected and fear the adverse consequences that might arise from improper or malevolent use of their data. On the other hand, companies need to analyze their customers' data (to become "data driven") in order to remain competitive globally.
The de-facto standard technique to mitigate this problem, data anonymization, has been shown to be inadequate at truly preserving customers' privacy, while simultaneously reducing data utility, since its principle of operation is based on information destruction.
We propose a novel approach to privacy-preserving data analysis based on synthetic data. We plan to use a new trend in machine learning to create datasets that are fully synthetic, i.e. that do not contain data of real people or entities but yield the same results upon statistical analysis. Because such a dataset does not contain real data, it is privacy-preserving and GDPR-compliant.
Synthetic datasets surpass anonymized data both in terms of security and utility. We want to make this technique available to the market, so that customers can benefit from added safety regarding their data while companies can increase their competitiveness.

Project website:

 

INTUITE_AI - Generate realistic artificial data

Estimated Test Duration:

30 mins to 1 hour

Reward for this campaign:  

30€

Incentives

  1. T-shirts, stickers, pens
  2. Promotion via our social media channels
  3. Also, Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate

Campaign objectives

The objective of this beta test is not to assess data generation performance; instead, the goal is to verify that all the system functionalities work properly.

Requirements for this campaign

The product can be tested by virtually anyone; however, a bit of data knowledge would be beneficial.

Beta test instructions and scenario

The software aims at creating artificial tabular data. The system takes as input a table and automatically trains a Machine Learning model capable of generating an artificial copy. The synthetic version retains the same statistical properties but is void of sensitive information.

The process is composed of five steps:

  1. Register and log in (http://app.intuite.ai/)
     After filling in the registration form, you will receive a link to confirm your email. Upon verification of your email, a password will be sent to you.
  2. Load data
  3. Train the model
  4. Synthesize new data
  5. Download the data

The user guide is available at:

https://docs.google.com/document/d/1QEh5upzahsgrKTORPP9UBRp_EZiky1rJ35fAk2zNMU8/edit?usp=sharing

Campaign Mailing List

▲ Back


CROSSMINER

CROSSMINER enables the monitoring, in-depth analysis and evidence-based selection of open source components, and facilitates knowledge extraction from large open-source software repositories.

▼ Click for campaign details and rewards

CROSSMINER enables the monitoring, in-depth analysis and evidence-based selection of open source components, and facilitates knowledge extraction from large open-source software repositories.

Project website:

 

CROSSMINER Dashboard testing

Estimated Test Duration:

30 minutes

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate, Advanced

Campaign objectives

Introduce dashboards to OSS project stakeholders.

Requirements for this campaign

Web Browser

Beta test instructions and scenario

  • Open http://beta.crossminer.org/app/kibana#/dashboards
  • Select a dashboard, for instance "scava-overview".
  • Select a project from the "Project selection" view.
  • This filters the data for the selected project in the other views.
  • You can manage filters from the top left bar of the window.
  • At the top right of the window, you can edit the time selection (by default 10 years).
  • Click on "Dashboard" in the left pane to select another dashboard.
  • The same logic applies to all dashboards.
  • The last part is to understand how dashboards are created; follow this short tutorial: https://www.reachout-project.eu/view/Crossminer/kibana

Campaign Mailing List

▲ Back


Cross-CPP

One-stop data shop: Provides a single point of access to data streams from multiple smart products in easily accessible non-proprietary data formats

▼ Click for campaign details and rewards

Ecosystem

✓  Driven by the needs of Data Owners, Data Providers and Data Customers

✓  Brand-independent, open platform with a standardized interface → highly attractive for Service Providers

✓  Linking CPP data from different sectors enables higher-quality content and new services

✓  Economical solution for all value chain partners, due to a greater number of data customers

✓  Data Providers can profit from the innovation potential of thousands of external experts

User Engagement

✓  Empowers CPP owners to exploit their most valuable asset in the Internet of Things – their CPP data

✓  Owners can fully control which data they provide to which Service Provider

 

UI and UX of Cross-CPP data-marketplace Front-end Application

Estimated Test Duration:

15-20 minutes

Incentives

In recognition of your efforts and useful feedback, you will be added as a Cross-CPP contributor on our website. This offer is limited to beta testers interacting with the team by 15 October 2020. You will be contacted individually about contribution opportunities. Please provide a valid contact email during the survey phase.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner

Campaign objectives

The Cross-CPP project has released its integrated final prototype of the Data Marketplace (AGORA). It offers a private and secure platform to trade (buy and sell) vehicle and building datasets.

The objective of this campaign is to get feedback on the User Experience (UX) and User Interface (UI) of the front-end application. We provide detailed instructions to follow the processes of A) a Service Provider (digital company, data-driven startup, etc.) creating "Data Requests", or B) a Data Owner accepting these requests and turning them into offers and contracts.

We would like your feedback on the AGORA solution to improve the experience for a wide range of end users. Don't miss this opportunity to take part in this EU challenge!

Requirements for this campaign

- Basic Knowledge of web browsing
- Internet connection
- Preferably use Google Chrome

Beta test instructions and scenario

A) For Service Providers (looking to acquire data inside the Marketplace):

  1. Go to https://ng8.datagora.eu/login and click on the "Sign in" button on the upper right side of the page.
  2. Enter the email "serviceprovider1@test.com" and this password.
  3. On the left side of the application you see the "Main Menu"; click on the "Catalogue" section. Apply some filters to the Data Signals Catalogue and search for any signal you are interested in.
  4. Go to "Data Discovery". In this view, you can configure and personalize the type of "Data Requests" you aim to retrieve from the Marketplace. Apply any filter you consider interesting for your service and finally press "Discovery". E.g. Signal Type: Vehicle Speed (or more) / Add Suggestions (if needed) / Duration: All years / Location: Spain.
  5. Check the "Discovery Results" of available data in the marketplace. Then check "Analytics" or "Context Filtering" to get access to all the functionalities. Then click on "Create Data Request".
  6. Congratulations! You have created your first Data Request. Just go to "Main Menu" and click on "Data Wallet" to see a drop-down menu with "Data Requests" and "Data Transactions". Click on "Data Requests" to see the one you have created.

B) For Data Owners (Accepting Data Requests and Exchange my data generated):

  1. Go to https://ng8.datagora.eu/login and click on "Sign in" button on the right upon side of the web
    2. Enter in the EMAIL: "ownertest@test.com" and this password
    3. Go to the upper part of the Application, and firstly select the type of Device from you would accept to share your data. Two categoires "Vehicles" and "Buildings".
    4. In the left side of the Application you see "Main Menu", and you click in the drop menu  "Data Wallet", and you will see "Available Data Requests" and "Accepted Data Requests", "Data Collected" and "Transaction Summary".
    5. Click on "Available Data Requests" and you would find out all the "Data Requests" published by interested Service Providers. You can accept or decline, you as a Data Owner has the choices to share or not your data.
    6. Finally, if you click on "Accepted Data Requests" you will see, the full list of Accepted Requests I have grant my permission to access to data. If you are no longer interested in it, you can withdraw your consent to share data, and the Offer is terminated.

Campaign Mailing List

▲ Back

IOF2020 Logo_Payoff_RGB.png

IOF2020 Use Case Beverage Integrity Tracking

Beverage Integrity Tracking is a system based on IoT technologies that allows producers to monitor transport conditions and opens a direct communication channel from the producer to the retailer.

▼ Click for campaign details and rewards

What is B.I.T. (Beverage Integrity Tracking)?
Tracking wines and other beverages along the transportation chain is the main goal of the B.I.T. project.
The increasing economic and strategic relevance of export markets requires producers to gain control over the transportation conditions of their goods and to establish direct contact with final clients.
B.I.T. uses Internet of Things technologies to obtain data on shipping conditions and on final client satisfaction, allowing beverage producers to know exactly if, when and where incidents occur during transportation (excessive heat, low temperatures), and to receive feedback from final retailers (comments and/or complaints from wine shops & restaurant clients).
The device code is coupled with a web page or documents corresponding to the wine in the box, reporting all the information and marketing arguments the producer wants to communicate to retailers.

 

Beverage Integrity Tracking Test Beds Phase 2

Estimated Test Duration:

2 hours

Incentives

Producers can offer a product refund to their retailers (25% of the value, capped at 45 euro per beverage box), which will be paid for by IOF2020.
Also, producers participating in the campaign will take part in the ReachOut Lottery and may win a prize.

Target beta testers profile:

Business users

Beta tester level:

Beginner, Intermediate

Campaign objectives

The testing producers are asked to upload to the Beverage Integrity Tracking platform information about the shipment, product marketing, and the low/high temperature thresholds.
Testing producers will then communicate the retailers' contact details to the IOF2020 project team, who will ask the retailers to download the App to transfer shipment data to the platform, to download product marketing information and, if necessary, to give feedback on the product upon its arrival.
The objective of the campaign is to check that the system works, to collect feedback about the system's features and usability, and to collect temperature data from shipments.

Requirements for this campaign

Testers should sell wine to retailers over the summer and early autumn period.

Beta test instructions and scenario

Producers should use the system with real shipments (enter data into the platform, activate the data logger) that arrive at their destination over the summer and early autumn period.
Producers should put the IOF2020 project team in contact with retailers willing to close the loop by downloading the App and providing their feedback.

▲ Back


IOF2020 Use Case Big Wine Optimization - Remote Wine Analysis product

Optimizing the cultivation and processing of wine by sensor-actuator networks and big data analysis within a cloud framework.

▼ Click for campaign details and rewards

What is the objective of the Remote Wine Analysis System?
To perform frequent, inexpensive, remote characterization of wine composition in order to preserve the maximum expression of the grapes' quality potential throughout the winemaking phases.
How does it work?
A spectrophotometer reader, operating in the IR spectrum range, detects absorbance data from a wine sample in the winery and sends them to the cloud, where they are processed through a calibration curve based on a vast database, finally providing the winery with the desired compositional parameters.

 

Remote Wine Analysis Phase 2

Estimated Test Duration:

2 hours

Incentives

Testers participating in the campaign will take part in the ReachOut Lottery and may win a prize.

Target beta testers profile:

Business users

Beta tester level:

Beginner, Intermediate

Campaign objectives

Testers are asked to use the system to perform analyses during the harvest period.

Requirements for this campaign

Testers should perform analyses during the harvest period.

Beta test instructions and scenario

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

▲ Back


ARTICONF

A novel set of trustworthy, resilient, and globally sustainable decentralised social media services

▼ Click for campaign details and rewards

ARTICONF addresses issues of trust, time-criticality and democratisation for a new generation of federated infrastructure, to fulfil the privacy, robustness, and autonomy-related promises that proprietary social media platforms have so far failed to deliver.
This first demo covers two of the project's tools: the Trust and Integration Controller (TIC) and the Co-located and Orchestrated Network Fabric (CONF).

For more information about the project and the latest news, please visit https://articonf.eu


 

ARTICONF Crowd Journalism Use Case - Final Version

Estimated Test Duration:

30 min

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users

Beta tester level:

Beginner

Campaign objectives

The main objective of the campaign is to test and evaluate a final version of the crowd streaming ecosystem developed in the context of the ARTICONF project.

The Crowd Journalism ecosystem is composed of three main components:
- A Mobile Application developed for the live capture and streaming of news events, with which citizens and journalists can transmit a breaking news event in real time as it happens in a particular location;
- A web-based Classifier that aggregates the multiple live news video feeds from citizens and displays them in a four-player multiviewer, where they can be classified according to three criteria: impact, trustworthiness and level of information;
- A web-based Marketplace in which the creators of the news videos can sell them to potential buyers (citizens, news companies). The transactions are made using virtual tokens that can be exchanged for products or services.

The Crowd Journalism platform sits on top of a blockchain-based infrastructure to ensure anonymity and secure, immutable transactions.

In this sense, the three main components will be evaluated in this test:
- Mobile crowd streaming application;
- Multiviewer/Classifier/Editor;
- Marketplace.

Requirements for this campaign

The requirements for the testing are the following:

- Personal computer with Internet connection
- At least one smartphone with Internet connection and GPS (Wi-Fi, 4G)
- Web Browser (Chrome, Firefox)

It is not necessary to install any component of the ecosystem on your own device.

Beta test instructions and scenario

In order to test the ecosystem, please go to the feedback questionnaire, where the instructions are provided.

Campaign Mailing List

 

ARTICONF v1

Estimated Test Duration:

20-30 mins

Reward for this campaign:  

30€

Target beta testers profile:

Developers

Beta tester level:

Intermediate, Advanced

Campaign objectives

The objective of this campaign is to test the first version of the ARTICONF toolset and to collect feedback from DApp (Distributed Application) developers, which can be used to adapt the services developed to market requirements.

Requirements for this campaign

  • Personal computer.
  • Internet connection.
  • Web browser - Firefox.
  • Knowledge:
    • Expertise in deployment or blockchain is not needed to understand the basics of how the toolset works and its benefits.
    • Knowledge of Hyperledger Fabric concepts is required to configure the blockchain network (see the illustrative commands after this list).
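For orientation, "Hyperledger Fabric concepts" here means artefacts such as channels, organisations and the peer CLI. Configuring a network by hand typically involves commands like the following sketch, which borrows the profile and channel names from the official fabric-samples (illustrative assumptions only; CONF automates this step in the ARTICONF toolset):

# Generate a channel-creation transaction from configtx.yaml
# (the profile and channel names are assumptions taken from fabric-samples)
$ configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel.tx -channelID mychannel

# Create the channel from the peer CLI
$ peer channel create -o localhost:7050 -c mychannel -f ./channel.tx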

Beta test instructions and scenario

Introduction

The project presents four tools; this first version integrates two of them behind a user interface:

  1. Trust and Integration Controller (TIC): enables developers to use and configure a robust Hyperledger Fabric blockchain network, allowing users to rely on its logic, privacy and data consensus in their applications.
  2. Co-located and Orchestrated Network Fabric (CONF): in its current version, automatically deploys the aforementioned Hyperledger Fabric network along with the APIs to interact with it.

A user interface has been designed and provided for the testing. It allows the use of the tools and, by means of a simple application, can transfer tokens between two peers using the blockchain network behind it. Moreover, through the browser, it enables developers and end users to see how the chain is updated after every transaction and to configure the organisation in the Hyperledger network.
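At the Fabric level, the token transfer performed by the demo application corresponds to a chaincode invocation followed by a query. A minimal sketch with the standard Fabric peer CLI is shown below; the channel, chaincode and function names are illustrative assumptions, not the actual names used by the ARTICONF deployment:

# Hypothetical invocation: transfer 10 tokens from peer1 to peer2
$ peer chaincode invoke -o localhost:7050 -C mychannel -n token -c '{"Args":["transfer","peer1","peer2","10"]}'

# Hypothetical query: check the updated balance of peer2
$ peer chaincode query -C mychannel -n token -c '{"Args":["balance","peer2"]}'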

Instructions

In order to test the provided scenario:

  1. Navigate to the following URL: http://tac.uist.edu.mk/beta/testing
  2. In the interface, sign up to create your own credentials.
  3. Follow the instructions provided when signing up at the previous URL.

Campaign Mailing List

 

ARTICONF Crowd Journalism Use Case

Estimated Test Duration:

20 min

Target beta testers profile:

Business users

Beta tester level:

Beginner

Campaign objectives

The main objective of the campaign is to test and evaluate a first version of the crowd streaming ecosystem developed in the context of the ARTICONF project.

The Crowd Journalism ecosystem is composed of three main components:
- A Mobile Application developed for the live capture and streaming of news events, with which citizens and journalists can transmit a breaking news event in real time as it happens in a particular location;
- A web-based Classifier that aggregates the multiple live news video feeds from citizens and displays them in a four-player multiviewer, where they can be classified according to three criteria: impact, trustworthiness and level of information;
- A web-based Marketplace in which the creators of the news videos can sell them to potential buyers (citizens, news companies). The transactions are made using virtual tokens that can be exchanged for products or services.

The Crowd Journalism platform sits on top of a blockchain-based infrastructure to ensure anonymity and secure, immutable transactions.

Two main components will be evaluated in this test:
- Mobile crowd streaming application
- Multiviewer/Classifier/Editor

Requirements for this campaign

The requirements for the testing are the following:

- Personal computer with Internet connection
- At least one smartphone with Internet connection and GPS (Wi-Fi, 4G)
- Web Browser (Chrome, Firefox)

It is not necessary to install any component of the ecosystem on your own device.

Beta test instructions and scenario

In order to test the ecosystem, please go to the feedback questionnaire, where the instructions are provided.

Campaign Mailing List

▲ Back


Energy Marketplace

A blockchain based secure peer to peer energy trading platform.

▼ Click for campaign details and rewards

This project is an initial prototype of a blockchain-based peer-to-peer energy trading platform and is one of the use cases of the EU project ARTICONF. The initial prototype contains the following basic functionalities:

- Register as a new user.
- Log in with registered credentials.
- View the personal energy profile statistics.
- Post advertisements to sell energy.
- Buy energy from the Energy Marketplace.
- View the history of all transactions.
- Energy Service Providers (Admins) can access the Blockchain Explorer to get an overview of all energy transactions.

The prototype implements smart contracts which save all the transactions associated with smart meter readings, energy advertisements, marketplace transactions and user energy profiles to the blockchain.


 

ENERGY MARKETPLACE

Estimated Test Duration:

15 minutes

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate

Campaign objectives

This project is an initial prototype of a blockchain-based peer-to-peer energy trading platform and is one of the use cases of the EU project ARTICONF.

The goal of this campaign is to gather feedback on testing the prototype functionalities.

Requirements for this campaign

The prototype can be accessed from a web browser (no other requirements).

Beta test instructions and scenario

This initial prototype implements a blockchain-based peer-to-peer trading platform and contains the following basic functionalities:
- Register as a new user.
- Log in with registered credentials.
- View the personal energy profile statistics.
- Post advertisements to sell energy.
- Buy energy from the Energy Marketplace.
- View the history of all transactions.
- Admins can access the Blockchain Explorer to get an overview of all energy transactions.

This prototype implements smart contracts which save all the transactions associated with smart meter readings, energy advertisements, marketplace transactions and user energy profiles to the blockchain.

Before you answer the feedback questionnaire, please follow these testing instructions.

As a reminder, here is the Web App Link.

Once you have performed the test, please fill in the FEEDBACK QUESTIONNAIRE on ReachOut.

Campaign Mailing List

▲ Back


FASTEN

The FASTEN project provides a more accurate dependency management solution by diving down to the function-call level.

▼ Click for campaign details and rewards

The FASTEN project is developing an intelligent software package management system that will enhance robustness and security in software ecosystems.
FASTEN addresses the operational and compliance risks associated with dependencies on networks of external open-source software libraries.
To solve these issues, FASTEN introduces fine-grained, method-level tracking of dependencies on top of existing dependency management networks.
The project is developed by a consortium of seven partners and has received funding from the European Union’s Horizon 2020 research and innovation programme.
The project started in January 2019 and will run until December 2021.

 

FASTEN Maven plugin

Estimated Test Duration:

1 hour

Target beta testers profile:

Developers

Beta tester level:

Intermediate, Advanced

Campaign objectives

FASTEN has published a first stable release of its Maven plugin, which allows developers to use FASTEN knowledge base information to identify potential security, licensing and other issues in the direct and transitive dependencies of their Java projects.
The purpose of this campaign is to test the accuracy of the information reported by FASTEN and the ease of use of the Maven plugin.

Requirements for this campaign

A Java project that is using Maven as build/packaging tool.

Beta test instructions and scenario

Add the FASTEN Maven plugin to your Java project using the documentation available at https://github.com/fasten-project/fasten-maven-plugin/wiki.
If your project has a known vulnerability in one of its dependencies, the Maven plugin should be able to identify it.
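As a minimal sketch of a test run, assuming the plugin has been declared in your pom.xml as described in the wiki and that it executes during a standard build phase (both assumptions; the wiki is authoritative):

# Run a full build so the plugin can analyse the resolved dependency graph
$ mvn clean verify

Any known vulnerability found in a direct or transitive dependency should then show up in the build output.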

Campaign Mailing List

 

Java call graph generator

Estimated Test Duration:

1 hour

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Developers

Beta tester level:

Intermediate, Advanced

Campaign objectives

We want to validate the accuracy of the Java call graphs generated by the FASTEN tooling.

This is a first milestone to validate a subpart of the FASTEN project. Another campaign will be launched later to test the whole project.

Requirements for this campaign

You need to have Java 11 (JRE) installed on your computer.

For the final stage of the test you will need to have your own Java application (and its dependencies).

Beta test instructions and scenario

You will find below a description of what you need to do to successfully run and experiment with the beta version of the Java call graph generator.

Prerequisites

First make sure that you have Java 11 or higher installed.
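You can confirm which version is installed from a terminal; if the reported version is below 11, install a newer JRE before continuing:

$ java -version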

Next you need to download the Java call graph generator.

Verify that you are good to go by executing the Java call graph generator:

$ java -jar javacg-opal-0.0.1-beta1-with-dependencies.jar

This should print the syntax help message.

You are now ready to perform a first test.

Testing Scenarios and instructions

The campaign includes 3 different test scenarios, presented from the simplest to the most advanced. We recommend that you follow the instructions from top to bottom, but each scenario is independent, so you can safely skip one.

- Test #1: execution with a basic local application and dependency

- Test #2: execution with a public Maven artifact

- Test #3: execution with your own project

To get started, please go to the testing instructions and guidelines

Thanks for evaluating the FASTEN call graph generator and stay tuned for more FASTEN beta testing campaigns (including the Maven plugin)!

Campaign Mailing List

▲ Back


CROSSMINER Softeam Use Case Evaluation

Internal evaluation of the CROSSMINER platform in the context of the Softeam use case

▼ Click for campaign details and rewards

CROSSMINER enables the monitoring, in-depth analysis and evidence-based selection of open source components, and facilitates knowledge extraction from large open-source software repositories.

 

CROSSMINER Final Evaluation

Estimated Test Duration:

10 minutes

Target beta testers profile:

Developers

Beta tester level:

Intermediate, Advanced

Campaign objectives

Collect feedback from the Modeliosoft development team (Softeam) about the deployment and usage of the CROSSMINER platform.

Requirements for this campaign

Modeliosoft development team only

Beta test instructions and scenario

Answer the questionnaire based on your experiments with the CROSSMINER platform over the last 3 months.

▲ Back