03 December
European Big Data Value Forum 2021
The ReachOut BetaCenter platform will be showcased at the European Big Data...
01 December
HELIOS BYOSMA campaign is open
Developers are invited to try the Helios Bring Your Own Social Media App
23 November
BDVA Data Platform Workshop
EU Data platform projects are now ready to start their beta-testing campaigns...
09 November
OSXP 2021 SlideDeck
09 November
OW2 is showcasing ReachOut at OSXP
OW2 and the ReachOut project are encouraging developers and business users to...

ReachOut for Project Leaders

Are you a project leader?

Set up a beta-testing campaign
 for your project!

  • Register your project
  • Arrange a training session
  • Promote the campaign
  • Learn from feedback

Improve your software
Align with market expectations

ReachOut for Beta Testers

Are you a beta tester?

Check out Existing Campaigns

Participate in research project
 beta-testing campaigns!

  • Choose your beta-testing job
  • Execute the tutorial
  • Answer feedback questions
  • Pick up your reward

Look inside state-of-the-art software
Enhance your professional network


Check out these campaigns

Wayeb

Wayeb is a Complex Event Processing and Forecasting (CEP/F) engine written in Scala. It is based on symbolic automata and Markov models.


 

Wayeb

Starts on:

01/05/2021

Ends on:

31/12/2021

Estimated Test Duration:

30 min

Reward for this campaign:  

30€

Incentives

Beta Testers will be invited to join the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers who fully complete the questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Developers

Beta tester level:

Beginner

Campaign objectives

As Wayeb is going to be released in the coming months, we need to make sure that all of its functions work properly in the maritime and bio domains.

Requirements for this campaign

In order to build Wayeb from the source code you need Java SE version 8 or higher and SBT installed on your system. Java 8 is recommended.
You can find more details about the build process here: https://github.com/ElAlev/Wayeb/blob/main/README.md
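
A quick way to check these prerequisites from the command line (a minimal sketch, assuming java and sbt are already on your PATH):

$ java -version
$ sbt --version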

Beta test instructions and scenario

The tests to be performed cover the build, recognition and forecasting processes.

1) Building: First download Wayeb from https://github.com/ElAlev/Wayeb. Assuming $WAYEB_HOME is the root directory of Wayeb:

$ cd $WAYEB_HOME

Then build a fat jar:

$ sbt assembly
If it prints a success message, it passes the test.
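
To double-check that the fat jar was actually produced, you can list it at the path used by the commands below (the exact version suffix may differ in a later release):

$ ls cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar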

2) Recognition: In $WAYEB_HOME/data/demo/data.csv you may find a very simple dataset, consisting of 100 events. The event type is either A, B or C. In $WAYEB_HOME/patterns/demo/a_seq_b_or_c.sre you may find a simple complex event definition for the above dataset. It detects an event of type A followed by another event of type B or C. If we want to run this pattern over the stream, we must first compile this pattern into an automaton (make sure you have created a results folder under $WAYEB_HOME):

$ java -jar cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar compile --patterns:patterns/demo/a_seq_b_or_c.sre --outputFsm results/a_seq_b_or_c.fsm

Now, results/a_seq_b_or_c.fsm is the produced serialized finite state machine. Note that we also provided as input a declarations.sre file. This file simply lets the engine know that the three predicates IsEventTypePredicate(A), IsEventTypePredicate(B) and IsEventTypePredicate(C) are mutually exclusive (i.e., an event can have only one type). This helps the compiler create a more compact automaton. We can use this FSM to perform event recognition on this simple dataset:

$ java -jar cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar recognition --fsm:results/a_seq_b_or_c.fsm --stream:data/demo/data.csv --statsFile:results/recstats

If it prints information about the throughput and the number of matches, the engine has recognized the pattern in the stream and the test passes.

3) Forecasting: For forecasting, we first need to use a training dataset in order to learn a probabilistic model for the FSM. For this simple guide, we will use $WAYEB_HOME/data/demo/data.csv both as a training and as a test dataset, solely for convenience. Normally, you should use different datasets.

We first run maximum likelihood estimation:

$ java -jar cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar mle --fsm:results/a_seq_b_or_c.fsm --stream:data/demo/data.csv --outputMc:results/a_seq_b_or_c.mc

The file results/a_seq_b_or_c.mc is the serialized Markov model. The final step is to use the FSM and the Markov model to perform forecasting:

$ java -jar cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar forecasting --modelType:fmm --fsm:results/a_seq_b_or_c.fsm --mc:results/a_seq_b_or_c.mc --stream:data/demo/data.csv --statsFile:results/forestats --threshold:0.5 --maxSpread:10 --horizon:20 --spreadMethod:classify-nextk

The last command should return some classification statistics such as precision, F1 score and accuracy.
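
For convenience, here is a minimal sketch that chains the three test steps above in a single shell session, reusing exactly the commands and paths from this page (it assumes sbt assembly has already produced the 0.2.0-SNAPSHOT jar and that a results folder exists under $WAYEB_HOME):

$ cd $WAYEB_HOME
$ mkdir -p results
$ JAR=cef/target/scala-2.12/wayeb-0.2.0-SNAPSHOT.jar
$ java -jar $JAR compile --patterns:patterns/demo/a_seq_b_or_c.sre --outputFsm results/a_seq_b_or_c.fsm
$ java -jar $JAR recognition --fsm:results/a_seq_b_or_c.fsm --stream:data/demo/data.csv --statsFile:results/recstats
$ java -jar $JAR mle --fsm:results/a_seq_b_or_c.fsm --stream:data/demo/data.csv --outputMc:results/a_seq_b_or_c.mc
$ java -jar $JAR forecasting --modelType:fmm --fsm:results/a_seq_b_or_c.fsm --mc:results/a_seq_b_or_c.mc --stream:data/demo/data.csv --statsFile:results/forestats --threshold:0.5 --maxSpread:10 --horizon:20 --spreadMethod:classify-nextk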

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire in order to enter the ReachOut incentives programme and to join the mailing list for this campaign, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.



Build your own Social Media APP

HELIOS provides a toolkit for P2P Social Media applications. Come and build your own social media APP!


We are providing tools to develop novel social media applications for Android. The tools not only contain basic messaging, but also other features like communication in contexts and information overload control. There are also other optional modules available in the toolkit.

The aim now is to download the HELIOS tools and try them out. You need basic Android programming skills.

To get started, we provide a sample APP to build, with source code on GitHub. It should be built with a recent version of Android Studio, targeting a minimum Android version of 9.

Tutorial videos are available here: https://helios-social.com/helios-for-devs/tutorials/

You'll find detailed how-to-build instructions on GitHub:
https://github.com/helios-h2020/h.app-TestClient

-

HELIOS has received funding from the European Union’s Horizon 2020 Research and Innovation Programme under Grant Agreement N° 825585

 

Build your own Social Media APP

Starts on:

30/11/2021

Ends on:

28/12/2021

Estimated Test Duration:

4 hours

Reward for this campaign:  

60€

Incentives

By participating, you will be among the forerunners developing the next generation of Social Media, one that no longer depends on the corporations that currently run social media platforms.

As a sign of gratitude, we will offer 12x 60€ rewards for trying out the toolkit and reporting your experiences.

We will close the campaign after 10 participants have answered the survey. Therefore, try to proceed swiftly to guarantee your reward.

Also, Beta Testers will be invited to join the ReachOut "Hall of Fame" and will automatically take part in the ReachOut end-of-project Super Prize.

Target beta testers profile:

Developers

Beta tester level:

Intermediate

Campaign objectives

HELIOS has released source code for building P2P social media applications. We would like feedback on building a sample APP from the sources.

Requirements for this campaign

- Android Studio
- Android phone, version 9 or above, with network access (preferred; an emulator can be used instead)
- basic Android programming skills

Tutorial videos are available here: https://helios-social.com/helios-for-devs/tutorials/

You'll find detailed how-to-build instructions on GitHub:
https://github.com/helios-h2020/h.app-TestClient

Beta test instructions and scenario

1) Get familiar with HELIOS instructions
2) Download sample codes and relevant libraries
3) Build the sample APP into an APK (you may modify the sample if you wish; a command-line build sketch follows this list)
4) Install the APK on the Android phone (in the absence of a phone, you can use an emulator)
5) Verify on the Android phone that the app launches OK.
6) Send a unique message (such as a random number) to the "BugChat" channel in the APP and do the following steps:
  - In the app, open the options menu (the three dots on the right-hand side, after the title)
  - Tap “Discover others”
  - Search for your nickname and WRITE DOWN the first 6 characters of your ID (it is in the format “nickname @ID”)
7) Fill in the survey and let us know
  - what was the message you sent, with rough time/date information, and
  - the first 6 characters of your ID
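
For steps 2-4, the GitHub instructions linked above are authoritative. As a rough sketch only, assuming the sample project ships the standard Gradle wrapper (the module name and APK output path may differ in the actual repository), a command-line build and install could look like this:

$ git clone https://github.com/helios-h2020/h.app-TestClient.git
$ cd h.app-TestClient
$ ./gradlew assembleDebug
$ adb install -r app/build/outputs/apk/debug/app-debug.apk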

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire in order to enter the ReachOut incentives programme and to join the mailing list for this campaign, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.



DECODER - doc2json

DEveloper COmpanion for Documented and annotatEd code Reference


DECODER builds an Integrated Development Environment (IDE) that combines information from different sources through formal and semi-formal models to deliver software project intelligence, shortening the learning curve of software programmers and maintainers and increasing their productivity. Developers will deliver high-quality code that is more secure and better aligned with requirements, and maintainers will immediately know what has been done, how, and with what tools.

 

doc2json

Starts on:

07/06/2021

Ends on:

31/12/2021

Estimated Test Duration:

10 minutes

Reward for this campaign:  

30€

Incentives

Beta Testers will be invited to join the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers who fully complete the questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate

Campaign objectives

doc2json extracts text/data from word/openoffice/excel documents to json format.

With appropriate parsing algorithms provided by the user, it can extract data from any structured documentation. The samples directory contains algorithms to extract the text of a word/openoffice document into a json format that nests the sections of the document. It also contains algorithms to extract data from invoices following the same openoffice template.

The innovative part of this project consists in translating a user's master algorithm that controls the events coming from the documentation into a slave algorithm that can be interrupted and introspected. The resulting parsing library contains many `goto`s to reflect the state machine model as the result of the translation. This part still has bugs; it nevertheless works on the 4 parsing algorithms of the project.

The goal of this campaign is to make sure that the first version of doc2json functions as expected.

Feedback is expected about:

  • Potential usages of doc2json (on a database of documents)
  • The effectiveness and the current limitations of doc2json

Check the current functions offered and try them at the end on your own documents.

Requirements for this campaign

doc2json takes as input a word/openoffice document (like the internal documentation) and extracts the text/data into a json file whose format can be specified by the user.

To install and build doc2json, you will need:
- a Linux environment (only Ubuntu 20.04 has been tested) with the following packages: git, zlib1g-dev, g++, libclang-dev (these packages also exist on Windows and macOS, and a port to these environments is planned for the future)
- or the ability to create Docker images and run Docker containers

The test described below is for Linux and/or Docker.

Beta test instructions and scenario

Install doc2json

doc2json is located at https://gitlab.ow2.org/decoder/doc_to_asfm

Linux installation

The application requires zlib https://zlib.net/ to retrieve the content of the documents and Clang Tools https://clang.llvm.org/docs/ClangTools.html to convert the user's parsing algorithm into an interruptible reading algorithm.

To install these libraries, you can type the following commands:

> sudo apt-get install zlib1g-dev clang libclang-dev
> apt list --installed "libclang*-dev"

If the clang version is less than clang-10 (for instance clang-6), the next cmake build process may fail and you need to update to clang-10 with the following commands

> sudo apt-get install clang-10 libclang-10-dev
> sudo apt-get purge --auto-remove libclang-common-6.0-dev
> sudo ln -s /usr/bin/clang-10 /usr/bin/clang

You can also check that llvm provides its own header files to the Clang Tools

> llvm-config --includedir

should return a path that contains llvm-xx. Fedora, for instance, returns /usr/include, which prevents the Clang Tools from finding some headers like <stddef.h> that are required for string manipulation during the source-to-source transformation. In such a case, you can try the Docker installation, which is more robust.

Please note that if, after having tested doc2json, you need to revert to your original clang 6 version, just type:

# do not use these commands before having built the project and the algorithm libraries
> sudo rm /usr/bin/clang
> sudo apt-get install clang

Then you can download and build the project

> git clone git@gitlab.ow2.org:decoder/doc_to_asfm.git doc2json
> cd doc2json
> mkdir build
> cd build
> cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$PWD ..
> make -j 4
# make test is optional: it checks that doc2json works on the internal documentation
> make test
> make install

Docker installation

To download and build it (Docker users) :

> git clone git@gitlab.ow2.org:decoder/doc_to_asfm.git doc2json
> cd doc2json
> sudo docker build -t doc2json_img .
> docker run --name doc2json -it doc2json_img bash
# in the docker container
> cd /doc2json

Minimal test (to check doc2json works)

Under a Linux system, you can then run some test examples in the build directory (which is also the installation directory) by typing:

> ./bin/doc2json ./share/doc2json/libInvoiceReader.so -c ./share/doc2json/config-invoice.xml ../test/invoice.ods -o test/invoice.json
> ./bin/doc2json ./share/doc2json/libTextWordReader.so -c ./share/doc2json/config-text-word.xml ../test/StructuredDocument_french.docx -o test/StructuredDocument_french_word.json
> ./bin/doc2json ./share/doc2json/libTextOpenofficeReader.so -c ./share/doc2json/config-text-openoffice.xml ../test/StructuredDocument_french.odt -o test/StructuredDocument_french_openoffice.json

Under the Docker container, you can run the same test examples from the /doc2json directory

> ./bin/doc2json ./share/doc2json/libInvoiceReader.so -c ./share/doc2json/config-invoice.xml src/test/invoice.ods -o src/test/invoice.json
> ./bin/doc2json ./share/doc2json/libTextWordReader.so -c ./share/doc2json/config-text-word.xml src/test/StructuredDocument_french.docx -o src/test/StructuredDocument_french_word.json
> ./bin/doc2json ./share/doc2json/libTextOpenofficeReader.so -c ./share/doc2json/config-text-openoffice.xml src/test/StructuredDocument_french.odt -o src/test/StructuredDocument_french_openoffice.json

to extract the content of the documents into json format. You can open the document ../test/StructuredDocument_french.docx and compare its content with the result of the extraction, that is, the file test/StructuredDocument_french_word.json.

Then, in the build directory for Linux users and in the /doc2json/src directory for Docker users

> diff test/StructuredDocument_french_openoffice.json test/StructuredDocument_french_word.json

should show no differences between the two extracted json files, even though the source formats (word versus openoffice/opendocument) are very different.

A utility create-reader.sh is provided to generate a parsing library from a custom user's parsing algorithm. Hence the command (in the build directory for Linux users and in the /doc2json directory for docker users)

./bin/create-reader.sh -I . share/doc2json/InvoiceReader.cpp

regenerates the parsing library share/doc2json/libInvoiceReader.so from the parsing algorithm share/doc2json/InvoiceReader.cpp. The translation into a state machine model is generated in the file share/doc2json/InvoiceReaderInstr.cpp.

Apply doc2json to your documents (a text with a title and sections, subsections, subsubsections, ...)

We now suppose that your document is named file.docx or file.odt. It should have a title identified by a specific style.

You may run the test in /tmp with an environment variable DOC2JSON_INSTALL_DIR that refers to the installation directory of doc2json. This is the build directory for Linux users. You can type

export DOC2JSON_INSTALL_DIR=$PWD

in the build directory before trying the test. This environment variable is automatically set for Docker users, but you need to copy your document into the /tmp directory of the Docker container with the command:

docker cp .../file.docx doc2json:/tmp

You need to look at the names of the styles used for the headings of your document.
By default, for a French word document, these styles are Titre, Titre1, Titre2, Titre3, ... For an English word document, they may be Title, Heading1, Heading2, Heading3, ... You need to provide these styles to doc2json by modifying the configuration file config-text-word.xml and replacing the French Titre styles with the styles that appear in the style ribbon of Word.

> cp $DOC2JSON_INSTALL_DIR/share/doc2json/config-text-word.xml /tmp
# edit config-text-word.xml and replace Titre1 by Heading1 or by the styles of the different sections of your document
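
If you prefer to do the substitution from the command line, the following sketch is one way to do it (this assumes GNU sed and the default English style names Title/Heading1/Heading2/...; adjust the replacement names to whatever actually appears in your Word style ribbon):

> sed -i 's/Titre\([1-9]\)/Heading\1/g; s/Titre/Title/g' /tmp/config-text-word.xml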

If the title is not recognized, doc2json will answer "bad generation of the output format!" and produce a corrupted output json document. This should be fixed in a future version.

For the opendocument format, you need to find the style ribbon by clicking on the parameter menu (top right) of LibreOffice. Then styles like "Heading 1" should be replaced by "Heading_20_1" in config-text-openoffice.xml - spaces are replaced by "_20_". Sometimes LibreOffice renames these styles internally. For instance, it may rename the "Title" style to "P1" or "P2". The parsing algorithm is not smart enough to recognize this renaming - it will be in a future version. So if the extraction fails, you can unzip your file.odt and then edit the file content.xml to look for the text of your "Title" and see what the associated style is. Do not spend too much time on this point if it fails.
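
A minimal sketch of that inspection (assuming the unzip and grep utilities are available; in opendocument files the style names appear in text:style-name attributes of content.xml):

> unzip -o file.odt content.xml
> grep -o 'style-name="[^"]*"' content.xml | sort -u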

Then the appropriate command

> cd /tmp
> $DOC2JSON_INSTALL_DIR/bin/doc2json $DOC2JSON_INSTALL_DIR/share/doc2json/libTextWordReader.so -c /tmp/config-text-word.xml file.docx -o file.json
> $DOC2JSON_INSTALL_DIR/bin/doc2json $DOC2JSON_INSTALL_DIR/share/doc2json/libTextOpenofficeReader.so -c /tmp/config-text-openoffice.xml file.odt -o file.json

should extract the sections and the text of your document into file.json.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire in order to enter the ReachOut incentives programme and to join the mailing list for this campaign, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.



TRUSTS

Trusted Secure Data Sharing Space


TRUSTS will ensure trust in the concept of data markets as a whole via its focus on developing a platform based on the experience of two large national projects, while allowing the integration and adoption of future platforms by means of interoperability. The TRUSTS platform will act independently and as a platform federator, while investigating the legal and ethical aspects that apply to the entire data valorization chain, from data providers to consumers, i.e., it will:

- set up a fully operational and GDPR-compliant European Data Marketplace for personal-related and non-personal-related data targeting individual and industrial use, by leveraging existing data marketplaces (Industrial Data Space, Data Market Austria) and enriching them with new functionalities and services to scale out.

- demonstrate and realise the potential of the TRUSTS Platform in 3 use cases targeting corporate business data in the financial and operator industries, while ensuring it is supported by a viable, compliant and impactful governance, legal and business model.

 

TRUSTS requirements elicitation

Starts on:

24/09/2021

Ends on:

31/12/2021

Estimated Test Duration:

20 mins

Target beta testers profile:

Business users, Developers

Beta tester level:

Intermediate, Advanced

Campaign objectives

The TRUSTS consortium aims to receive responses to the requirements elicitation questionnaire and to interview industrial, academic and regulatory domain experts in order to guide the TRUSTS data marketplace specification. Your responses will help us evaluate the functionality, services and operational capacity of such an endeavour and establish its operation.

Requirements for this campaign

In this questionnaire you will be asked about the data sharing processes in your organization; it is therefore aimed at people who need to exchange or trade data within their organization.

Beta test instructions and scenario

Just follow the link to the questionnaire.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire in order to enter the ReachOut incentives programme and to join the mailing list for this campaign, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.



DECODER - JmlGen

DEveloper COmpanion for Documented and annotatEd code Reference


DECODER builds an Integrated Development Environment (IDE) that combines information from different sources through formal and semi-formal models to deliver software project intelligence, shortening the learning curve of software programmers and maintainers and increasing their productivity. Developers will deliver high-quality code that is more secure and better aligned with requirements, and maintainers will immediately know what has been done, how, and with what tools.

 

JmlGen

Starts on:

09/04/2021

Ends on:

31/12/2021

Estimated Test Duration:

1 hour

Reward for this campaign:  

30€

Incentives

Beta Testers will be invited to join the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers who fully complete the questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Developers

Beta tester level:

Beginner

Campaign objectives

JmlGen generates JML annotations from what can be inferred from a java project: the result, a JML-annotated project, can then be processed by JML tools, like the OpenJml program verification tool.
Check the current functions offered and try them on your own java software.

Requirements for this campaign

JmlGen takes as input a java project (the project root directory, like the one out of a "git clone"), and generates JML in java files located in a specified destination directory.

To install and build JmlGen, you will need:
- A java environment (java11 minimum)
- Maven

The test described below is for Linux, but should be adaptable to other platforms (at least, paths in configuration files would probably have to be adapted).

Beta test instructions and scenario

Install JmlGen

JmlGen is located at https://gitlab.ow2.org/decoder/jmlgen .

To download and build it:

$ git clone https://gitlab.ow2.org/decoder/jmlgen.git
$ cd jmlgen
$ mvn clean install

Minimal test (to check JmlGen works)

You can then run a test example, as follows:

$ java -jar target/jmlgen-0.0.1-SNAPSHOT.jar src/main/resources/jmlgen.properties

Some java code with JML inside should be produced in /tmp/jmlgen (the original java code is in src/test/java/eu/decoder/sample_jmlgen/ directory).
To see it, for example:

$ cat /tmp/jmlgen/src/test/java/eu/decoder/sample_jmlgen/Sample*

(note that you may customize the output directory by editing src/main/resources/jmlgen.properties, and changing the value of "target" property).

Apply JmlGen to a real project

We will take as example the OW2 sat4j project (https://gitlab.ow2.org/sat4j).

You may run the test in /tmp:

$ cd /tmp
$ git clone https://gitlab.ow2.org/sat4j/sat4j.git

Now create a JmlGen configuration file (let's say, /tmp/sat4j.properties), with the following content (you may copy/paste it):

root: /tmp/sat4j
destination: /tmp/sat4j-JML
sourcepath: org.sat4j.br4cp/src/main/java:org.sat4j.core/src/main/java:org.sat4j.intervalorders/src/main/java:org.sat4j.maxsat/src/main/java:org.sat4j.pb/src/main/java:org.sat4j.sat/src/main/java:org.sat4j.sudoku/src/main/java

(Note: sourcepath lists all source folders, separated with colons; in many java projects, it would simply be set to "src/main/java").

Go back to the directory where you installed JmlGen, and run it:

$ java -jar target/jmlgen-0.0.1-SNAPSHOT.jar /tmp/sat4j.properties

You should see logs in the console, that detail where JML annotations have been inserted: go in some corresponding files (under /tmp/sat4j-JML) to discover JML annotations.

For example, the following command should display some JML annotations inserted in the SuDoku.java sample of Sat4j:

$ cat /tmp/sat4j-JML/org.sat4j.sudoku/src/main/java/org/sat4j/apps/sudoku/SuDoku.java | grep "/*@"

Note that some annotations can be of immediate interest: for example, "non_null" annotations indicate that a method result should be tested against null, because the method was called without a check (for a call like "method1().method2()", JmlGen would annotate "method1()" as "non_null", which denotes a risk of a null pointer exception). A plain text search for "non_null" annotations, without any analysis tool, can already be profitable.
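
As a quick illustration of that plain text search, the following should list every generated "non_null" annotation across the instrumented Sat4j sources produced with the configuration above:

$ grep -rn "non_null" /tmp/sat4j-JML | head -20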

Apply JmlGen to your own project

Now you are ready to use JmlGen on your own! And report bugs/issues at https://gitlab.ow2.org/decoder/jmlgen/-/issues.

When done, use any 3rd-party JML tool (like OpenJml) to perform analysis of your java code, now instrumented by JmlGen.
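
As an illustration only (the exact OpenJml invocation and flags vary between releases, so treat the command below as an assumption and check the OpenJml documentation), running its extended static checker over one annotated file might look roughly like:

$ java -jar openjml.jar -esc /tmp/sat4j-JML/org.sat4j.sudoku/src/main/java/org/sat4j/apps/sudoku/SuDoku.java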

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire in order to enter the ReachOut incentives programme and to join the mailing list for this campaign, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.


Recent Completed Campaigns


SMOOTH

Assisting Micro Enterprises to adopt and be compliant with GDPR


The SMOOTH project assists micro enterprises in adopting and complying with the General Data Protection Regulation (GDPR) by designing and implementing easy-to-use and affordable tools that generate awareness of their GDPR obligations and analyse their level of compliance with the new data protection regulation.

 

SMOOTH Market Pilot

Estimated Test Duration:

20-35min

Incentives

1) A free GDPR compliance report including a series of recommendations to improve your company’s compliance with the GDPR.
  
2) Be compliant and avoid potential fines. The lack of awareness, expertise and resources makes small enterprises the most vulnerable institutions in the face of strict enforcement of the GDPR.

3) Build up your brand reputation with clients and network by showing you have adequate solutions in place to protect their data.

Also, Beta Testers will be invited to join the ReachOut "Hall of Fame", will automatically take part in the ReachOut Lottery, and 24 randomly chosen Beta Testers will be awarded a money prize in recognition.

Target beta testers profile:

Business users

Beta tester level:

Beginner

Campaign objectives

The objective of this campaign for the SMOOTH project is to reach out to 500 micro-enterprises to complete the market pilot.

Requirements for this campaign

Micro-enterprises: enterprises that employ fewer than 10 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 2 million,

or small enterprises (SMEs): enterprises that employ fewer than 50 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 10 million, excluding enterprises that qualify as micro-enterprises.

Beta test instructions and scenario

Please read these instructions carefully before completing the questionnaires.

To connect to the SMOOTH platform and perform the test, please use this link.

Campaign Mailing List



TRIPLE

The GoTriple platform is an innovative multilingual and multicultural discovery solution for the social sciences and humanities (SSH).


TRIPLE stands for Transforming Research through Innovative Practices for Linked Interdisciplinary Exploration. The GoTriple platform will provide a single access point that allows you to explore, find, access and reuse materials such as literature, data, projects and researcher profiles at European scale.
It is based on the Isidore search engine developed by Huma-Num (a unit of CNRS).
A prototype will be released in autumn 2021.
It will be one of the dedicated services of OPERAS, the research infrastructure supporting open scholarly communication in the social sciences and humanities in the European Research Area.

 

GoTriple Beta Testing

Estimated Test Duration:

Around 30 minutes

Reward for this campaign:  

30€

Incentives

Beta Testers will be invited to join the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers who fully complete the questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate, Advanced

Campaign objectives

The Triple Project has released a beta version of its discovery platform GoTriple, with an initial set of features. More features will be added in the coming months, until March 2022. The aim of this campaign is to test the beta platform, identify any usability issues and improve the platform so that the final version, when released, will meet the needs of end-users.

Requirements for this campaign

Ideally, you will have a background in Social Science and Humanities and have knowledge of searching for information and research material, such as scientific publications.

Beta test instructions and scenario

The beta version of the software can be accessed at https://www.gotriple.eu  

Instructions:

Test 1 Goal: Find which authors published the most on a topic

  1. Go to the GoTriple Beta testing platform via the above web address
  2. Enter a search term of your choice 
  3. Browse the results of the search
  4. Select the 'Visual' View of results 
  5. Explore the visual view elements
  6. Refine the results to show papers from just one of the disciplines provided 
  7. Clear the refinement to show results from all disciplines again
  8. Find which authors published the most on this topic
  9. Click on an author name to view other publications from this author.

Test 2 Goal: Produce a Knowledge Map and examine it

  1. Make a new search on 'Society + Covid'
  2. Refine the results to show only papers published in 2020 
  3. Clear the 2020 selection
  4. Find book chapters published on this topic (same search Society + Covid)
  5. Clear the book chapter selection to return to the overall search list
  6. Create a Knowledge Map for this search (be patient it takes a bit of time!) 
  7. Examine the knowledge map and see the grouped publications 
  8. Return to Home Page

Test 3 Goal: Examine Disciplines and produce a Streamgraph

  1. Examine the Disciplines Tab - try clicking on any that are of interest to you
  2. View a list of publications from a discipline 
  3. Use the filter to refine the results shown
  4. Return to the Home page
  5. Make a new search on the term 'Co-design'
  6. View the Streamgraph for this search   
  7. Examine the results of the Streamgraph
  8. Visit the GOTRIPLE tab to view project information 

Campaign Mailing List



DataBench Toolbox

Based on existing efforts in big data benchmarking, the DataBench Toolbox provides a unique environment to search, select and deploy big data benchmarking tools and knowledge about benchmarking


At the heart of DataBench is the goal of designing a benchmarking process that helps European organizations developing Big Data Technologies (BDT) to reach for excellence and constantly improve their performance, by measuring their technology development activity against parameters of high business relevance.

DataBench will investigate existing Big Data benchmarking tools and projects, identify the main gaps and provide a robust set of metrics to compare technical results coming from those tools.

Project website:

 

Generation of architectural Pipelines-Blueprints

Estimated Test Duration:

30 minutes plus mapping to blueprints that requires desk analysis

Incentives

As recognition for your efforts and useful feedback, you will be added as a DataBench contributor on our website, your blueprint will be published, and the authorship of your contribution will be acknowledged in the Toolbox. This offer is limited to beta testers who interact with the team by 15 December 2020. You will be contacted individually about contribution opportunities. Please provide a valid contact email during the survey phase and in the form for suggesting new blueprints.

Also, Beta Testers will be invited to join the ReachOut Hall of Fame, will take part in the ReachOut Lottery, and 16 randomly selected beta testers who return a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Developers

Beta tester level:

Advanced

Campaign objectives

DataBench has released the DataBench Toolbox, a one-stop shop for big data and AI benchmarking. It offers a catalogue of existing benchmarking tools and information about technical and business benchmarking. 

This campaign (extended until the end of January 2021) aims at collecting content in the form of new architectural big data/AI blueprints mapped to the BDV reference model and the DataBench pipeline/blueprint. In this campaign we focus mainly on advanced users who would like to contribute practical examples of mapping their architectures to the generic blueprints. The results will be published in the DataBench Toolbox with ownership acknowledged, and can be used by the owners for their own purposes in their projects/organizations to demonstrate their alignment with existing standardization efforts in the community.

Note that we provide information about the BDV Reference Model, the four steps of the DataBench Generic data pipeline (data acquisition, preparation, analysis and visualization/interaction), and the generic big data blueprint devised in DataBench, as well as some examples and best practices for producing the mappings. Testers should study the available DataBench information and guidelines. Then, using the provided steps, testers should prepare their own mappings, resulting diagrams and explanations, if any. The Toolbox provides a web form interface to upload all relevant materials, which will later be assessed by an editorial board in DataBench before final publication in the Toolbox.

Requirements for this campaign

- Having a big data/AI architecture in place in your project/organization
- Willingness to provide mappings from your architecture to be part of the DataBench pipeline/blueprints
- Basic Knowledge of web browsing
- Internet connection
- Use preferably Google Chrome

For any inquiry regarding this campaign, please write an email to databenchtoolbox@gmail.com.

Beta test instructions and scenario

The Toolbox is accessible without the need to log in to the system, but the options are then limited to pure search. You can see that without registering the options in the menu are very few. To perform this campaign, we would like all involved users to first sign up to the DataBench Toolbox to get a user profile that you will use throughout the campaign:

- Go to https://databench.ijs.si/ and click on the “Sign up” option located at the top of the page on the right side.

- Fill in the form to generate your new user by providing a username and password of your choice, your organization, email, and your user type (at least Technical for this exercise).

Once you have created your user, please sign in to the Toolbox with it. You will be directed to the Toolbox main page again, where you can see that you have more options available.

Besides the options available through the menu, the main page provides:
A) a carousel with links,
B) User journeys for users of different profiles: Technical, Business and Benchmarks providers,
C) Videos aimed at these 3 types of users explaining briefly the main functionalities offered for each of them,
D) Shortcuts to some of the functionalities, such as FAQ, access to the benchmarks or knowledge catalogues, the DataBench Observatory, etc.

A) Get information about DataBench pipelines and blueprints

This campaign aims at providing you with the means to search and browse existing data pipelines, along with explanations of how to map your own architecture to efforts such as the BDV Reference Model, the DataBench Framework and the mappings with existing initiatives.

We encourage you to first go to the Technical user journey accessible from the front page of the Toolbox, read it and follow the links given to you to get acquainted with the entries related to blueprints and pipelines. In the “Advanced” user journey you will find the following:

- Link to the DataBench Framework and its relation to the BDV Reference Model, where you can find an introduction to the different elements that compose the DataBench approach towards technical benchmarking.

- Link to the DataBench Generic Pipeline, where the 4 main steps of data pipelines are explained. These 4 steps are the basic building blocks for the mappings to other blueprints and existing initiatives.

- User Journey - Generic Big Data Analytics Blueprint: This is the main piece of information that you need to understand what we mean by mapping an existing architecture to our pipelines and blueprints. You will find links to the generic pipeline figure.

- Practical example of creating a blueprint and derived cost-effectiveness analysis: Targeting the Telecommunications Industry.

- Ways to report your suggestions for new blueprints, by using the Suggest blueprint/pipeline option under the Knowledge Nuggets menu

Below is a summary of the minimal set of actions we encourage you to do:

  1. Go to the User journeys area of the main page and click on “Technical”.

  2. Go to the link to the User Journey: Generic Big Data Analytics Blueprint at the bottom of the “Advanced” area of the page.

  3. Read and understand the different elements of the pipeline (the 4 steps) and the elements of the generic blueprint as described in the previous link.

  4. Check examples of already existing blueprints. To do that, use the search box located at the top right corner and type “blueprint”. Browse through the blueprints.

B) Desk analysis

Once you are familiar with the DataBench Toolbox and the main concepts related to the blueprints, you need to do some homework. You should try to map your own architecture to the DataBench pipeline and the generic blueprint. We suggest the following steps:

- Prepare a figure with the architecture you have in mind in your project/organization. 

- Create links to the 4 steps of the data pipeline and generate a new figure showing the mapping.

- Create links to the Generic Big Data Analytics Blueprint figure and generate a new figure showing the mappings. To do so, you might use the generic pipeline figure and particularize it to your components, as was done in the example provided for the Telecommunications Industry.

C) Upload your blueprint to the Toolbox

- Upload your files as pdf or images by using the blueprint suggestion form (Suggest blueprint/pipeline) available from the Knowledge Nuggets menu. Try to include a description with a few words about the sector of application of your blueprint, the main technical decisions, or anything else you might find interesting to share.

- The DataBench project will review the blueprints and publish them on the platform, acknowledging your authorship.

Congratulations! You have completed the assignment of this campaign! Go now to fill in the feedback questionnaire. Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

 

Finding the right benchmarks for technical and business users

Estimated Test Duration:

30 to 40 minutes

Incentives

As recognition for your efforts and useful feedback, you will be added as a DataBench contributor on our website. This offer is limited to beta testers who interact with the team by 6 December 2020. You will be contacted individually about contribution opportunities. Please provide a valid contact email during the survey phase.

Also, Beta Testers will be invited to join the ReachOut Hall of Fame, will take part in the ReachOut Lottery, and 16 randomly selected beta testers who return a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users, Developers

Beta tester level:

Intermediate

Campaign objectives

DataBench has released the DataBench Toolbox, a one-stop shop for big data and AI benchmarking. It offers a catalogue of existing benchmarking tools and information about technical and business benchmarking. 

This campaign aims at getting feedback on the usage of the Toolbox and on the user interface of its web front-end. The Toolbox provides a set of user journeys, or suggestions, for three kinds of users: 1) Technical users (people interested in technical benchmarking), 2) Business users (interested in finding facts, tools, examples and solutions to make business choices), and 3) Benchmark providers (users from benchmarking communities or who have generated their own benchmarks). In this campaign we focus mainly on technical and business users. We provide some minimal instructions for these two types of users to check whether finding information in the Toolbox is a cumbersome process, and to collect your feedback. The idea is to use the user journeys drafted in the Toolbox to drive this search process and to understand whether users find this information sufficient to kick-start the process of finding the right benchmark and knowledge they were looking for.

Requirements for this campaign

- Previous knowledge about Big Data or AI
- Basic Knowledge of web browsing
- Internet connection
- Use preferably Google Chrome

For any inquiry regarding this campaign, please write an email to databenchtoolbox@gmail.com.

Beta test instructions and scenario

The Toolbox is accessible without the need to log in to the system, but the options are limited to pure search. You can see that without registering the options in the menu are very few. 

Initial steps to log in as a Toolbox user

To perform this campaign, we would like all involved users to first sign up to the DataBench Toolbox and create a user profile that you will use throughout the campaign:

- Go to http://databench.ijs.si/ and click on the “Sign up” option located at the top of the page on the right side.
- Fill in the form to generate your new user by providing a username and password of your choice, your organization, email, and your user type (Technical and/or Business, depending on your preferences and skills).

Once you have created your user, please sign in to the Toolbox with it. You will be directed to the Toolbox main page again, where you can check that you have more options available.

Besides the options available through the menu, the main page provides:
A) a carousel with links,
B) User journeys for users of different profiles: Technical, Business and Benchmarks providers,
C) Videos aimed at these 3 types of users explaining briefly the main functionalities offered for each of them,
D) Shortcuts to some of the functionalities, such as FAQ, access to the benchmarks or knowledge catalogues, the DataBench Observatory, etc. 

A) For Technical Users

This campaign aims at using the user journeys as a starting point to help you navigate the tool. We encourage you to click on the Technical user journey, read it and follow the provided links to get acquainted with the tool and what you can do with it. Get used to the two main catalogues: the benchmarks catalogue (tools for big data and AI benchmarking) and the knowledge nuggets catalogue (providing information about technical and business aspects related to benchmarking and big data technologies). Learn about existing big data architectural blueprints and browse some of them.

Additionally, if you already have a goal in mind (e.g. finding a benchmark for testing a specific ML model, or comparing the characteristics of different NoSQL databases), we encourage you to try to find the appropriate benchmark and report your conclusions later in the questionnaire.

Below is a summary of the minimal set of actions we encourage you to do:

  1. Go to the User journeys area of the main page and click on “Technical”. 

2. Read the content of this page, divided into advice for “Beginners” (first-time users) and “Advanced” (providing extra recommendations of what to do next). Focus first on the “Beginners” area and click on the different links to browse the different options and get used to the tool. We recommend coming back to the User journey page until you have clicked on all the available options for beginners, but feel free to stray and use the navigation and links from other pages to get used to the tool. After you finish clicking on all the options for beginners, you should have seen the benchmarks and knowledge nuggets catalogues, used some of the search functionalities and browsed some of the existing architectural blueprints. You are now ready to go further!

3. Focus now on the “Advanced” area of the User journey page 

- Here you will find ways to suggest new content via web forms (e.g. new benchmarks you might know of that are missing in the catalogue, a version of a big data blueprint you are dealing with in a project, or a new knowledge nugget based on your experience). We are not expecting you to fill in these forms at this stage, but just to acknowledge their potential value (and feel free to contribute any time).

- You will also find links to more specific, advanced user journeys or practical examples at the end of the advanced user journeys. Click on the ones that catch your attention and start navigating via the links they offer. From this moment on, we expect that you know the main options of the Toolbox and how to navigate and browse through it. You should have noted by now that both benchmarks and knowledge nuggets are annotated or categorized with clickable tags, which makes navigation through related items possible.

4. Get used to the search functionalities. The Toolbox offers 4 types of search:
- Search text box located at the top right corner of the pages. This is a full text search. You can enter any text and the results matching that text from both the benchmark and knowledge nuggets catalogues will appear.

- The “Search by BDV Reference Model” option from the menu allows you to have a look at the model created by the BDV PPP community (check the BDV SRIA for more details). The model is represented graphically and is clickable. If you click on any of the vertical or horizontal layers of the model you will be directed to the benchmarks and/or knowledge annotated in the Toolbox for these layers. Browse through this search.

- The “Guided benchmark search” option. In simple terms, this is a search by the tags used to annotate benchmarks and knowledge nuggets. These tags range from technical to business aspects. You can click on the categories of tags to find related information. Browse some of the options of this search.

- Finally, the “Search by Blueprint/Pipeline” option allows a search that graphically presents a generic architectural blueprint developed in DataBench with the most common elements of a big data architecture. The blueprint is aligned with the 4 steps of the DataBench Generic data pipeline (data acquisition, preparation, analysis and visualization/interaction). The graphic is clickable both at the level of the four steps of the pipeline and at some of the detailed elements of the blueprint. Click on the parts of the diagram you are interested in to find a list of existing benchmarks and nuggets related to them. Browse some of them. There are nuggets that show a summary of existing big data tools for each element of the pipeline. See if you find it easy to browse through the results.

Congratulations! You have completed the assignment of this campaign! Go now to fill in the feedback questionnaire. 

NOTE – Some of the available benchmarks can be deployed and run in your premises. Those are listed first in the Benchmark catalogue and when you click on them you will find the configuration file at the bottom of their description. If you want to run any of them, you should have dedicated infrastructure to do so. We are not expecting you to do so in this exercise.

B) For Business users

As for technical users, this campaign aims at using the user journeys as starting point to help you navigating the tool. We encourage you to click on the Business user journey, read it and follow the links given to you to get acquainted with the tool and what you can do with it. Get used to the main two catalogues: the benchmarks catalogue (tools for big data and AI benchmarking), but mainly to the knowledge nuggets catalogue (providing information about technical and business aspects related to benchmarking and big data technologies). Learn about existing big data architectural blueprints and browse to some of them, as they apply to different industries and might be of interest for business purposes.

Additionally, if you already have a goal in mind (e.g. finding the most widely used business KPIs in a specific sector), we encourage you to try to find the appropriate information in the knowledge nugget catalogue and report your conclusions later in the questionnaire.

Below is a summary of the minimal set of actions we encourage you to do:

  1. Go to the User journeys area of the main page and click on “Business”. 

2. Read the content of this page, divided into advice for “Beginners” (first-time users) and “Advanced” (providing extra recommendations for what to do next). Focus first on the “Beginners” area and click on the different links to browse the different options and get used to the tool. We recommend coming back to this User journey page until you have clicked on all the available options for beginners, but feel free to stray and use the navigation and links from other pages to get used to the tool. After you finish clicking on all the options for beginners, you should have seen the benchmarks and knowledge nuggets catalogues, used some of the search functionalities and browsed some of the existing architectural blueprints. You are now ready to go further!

3. Focus now on the “Advanced” area of the User journey page.
- You will find links to different elements, such as nuggets related to business KPIs, by industry, etc. Browse through them and follow the links.

- You will find ways to suggest new content via web forms (e.g. a new knowledge nugget based on your experience). We are not expecting you to fill in these forms at this stage, but just to acknowledge their potential value (and feel free to contribute any time).

- You will also find links to more specific, advanced user journeys or practical examples at the end of the advanced user journeys. Click on the ones that catch your attention and start navigating via the links they offer. From this moment on, we expect that you know the main options of the Toolbox and how to navigate and browse through it. You should have noted by now that both benchmarks and knowledge nuggets are annotated or categorized with clickable tags, which makes navigation through related items possible.

4. Get used to the search functionalities. The Toolbox offers 4 types of search:
- Search text box located at the top right corner of the pages. This is a full text search. You can enter any text and the results matching that text from both the benchmark and knowledge nuggets catalogues will appear.

- The “Search by BDV Reference Model” option from the menu allows you to have a look at the model created by the BDV PPP community (check the BDV SRIA for more details). The model is represented graphically and is clickable. If you click on any of the vertical or horizontal layers of the model you will be directed to the benchmarks and/or knowledge annotated in the Toolbox for these layers. Browse through this search.

- The “Guided benchmark search” option. In simple terms, this is a search by the tags used to annotate benchmarks and knowledge nuggets. These tags range from technical to business aspects. You can click on the categories of tags to find related information. Browse some of the options of this search.

- Finally, the “Search by Blueprint/Pipeline” option allows a search that graphically presents a generic architectural blueprint developed in DataBench with the most common elements of a big data architecture. The blueprint is aligned with the 4 steps of the DataBench Generic data pipeline (data acquisition, preparation, analysis and visualization/interaction). The graphic is clickable both at the level of the four steps of the pipeline and at some of the detailed elements of the blueprint. Click on the parts of the diagram you are interested in to find a list of existing benchmarks and nuggets related to them. Browse some of them. There are nuggets that show a summary of existing big data tools for each element of the pipeline. See if you find it easy to browse through the results.
5. This part of the test is not guided, as we expect you to navigate through the options you have seen previously. Once you know how to navigate, try to find information of interest for your industry or area of interest:
• Try to find information about the most widely used KPIs or interesting use cases.
• Try to find information about architectural blueprints for your inspiration.

Congratulations! You have completed the assignment of this campaign! Go now to fill in the feedback questionnaire.

Campaign Mailing List



STAMP

Software Testing AMPlification for the DevOps Team


STAMP stands for Software Testing AMPlification. Leveraging advanced research in automatic test generation, STAMP aims at pushing automation in DevOps one step further through innovative methods of test amplification. 

STAMP reuses existing assets (test cases, API descriptions, dependency models) to generate more test cases and test configurations each time the application is updated. Acting at all steps of the development cycle, STAMP techniques aim to reduce the number and cost of regression bugs at the unit level, configuration level and production stage.

STAMP raises confidence in and fosters the adoption of DevOps by the European IT industry. The project gathers four academic partners with strong software testing expertise, five software companies (in e-Health, Content Management, Smart Cities and Public Administration), and an open source consortium. This close-to-industry research addresses concrete, business-oriented objectives.

 

Try the STAMP toolset

Estimated Test Duration:

2 hours

Incentives

You'll have nothing to lose and everything to win, including time and quality in your software releases!
Moreover, you'll be among the first to experiment with the most advanced Java software testing tools.

And, in recognition of your efforts and useful feedback, you will receive a limited edition “STAMP Software Test Pilot” gift and be added as a STAMP contributor. This offer is limited to beta testers who interact with the team by 30 October 2019. You will be contacted individually for a customized gift and for contribution opportunities. Please provide a valid contact email.

Target beta testers profile:

Developers

Beta tester level:

Beginner

Campaign objectives

Trying the open source toolset is free, and it will amplify your testing efforts automatically. Experiment with DSpot, Descartes, CAMP or Botsing now.

Requirements for this campaign

Download and try DSpot, Descartes, CAMP or Botsing; one possible way of getting started with DSpot is sketched below.
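
The following sketch is purely illustrative and not part of the official campaign instructions. It assumes a recent JDK and Maven are installed, that DSpot is fetched from the STAMP project's GitHub organization, and that your own project is a Maven project; $YOUR_PROJECT_HOME is a placeholder. The plugin coordinates and goal name are quoted from memory of the DSpot README and should be verified there before use.

# Fetch and build DSpot from source (repository URL assumed: STAMP project GitHub organization)
$ git clone https://github.com/STAMP-project/dspot.git
$ cd dspot
$ mvn install -DskipTests

# Run DSpot's Maven plugin against your own Maven project to amplify its existing unit tests
# ($YOUR_PROJECT_HOME is a placeholder; check the DSpot README for exact coordinates and options)
$ cd $YOUR_PROJECT_HOME
$ mvn eu.stamp-project:dspot-maven:amplify-unit-tests

Refer to each tool's documentation (DSpot, Descartes, CAMP, Botsing) for where results are written and which options are available.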

Beta test instructions and scenario

Campaign Mailing List

▲ Back


Energyshield - Security Culture Assessment tool

EnergyShield is a complete state-of-the-art security toolkit for the EPES sector

▼ Click for campaign details and rewards

EnergyShield captures the needs of Electrical Power and Energy System (EPES) operators and combines the latest technologies for vulnerability assessment, supervision and protection to draft a defensive toolkit. The project aims to:
- Adapt and improve available building tools (assessment, monitoring & protection, remediation) in order to support the needs of the EPES sector.
- Integrate the improved cybersecurity tools in a holistic solution with assessment, monitoring/protection and learning/sharing capabilities that work synergistically.
- Validate the practical value of the EnergyShield toolkit in demonstrations involving EPES stakeholders.
- Develop best practices, guidelines and methodologies supporting the deployment of the solution, and encourage widespread adoption of the project results in the EPES sector.

 

Energyshield SBAM Tool

Estimated Test Duration:

20 to 30 minutes

Incentives

Beta testers will be acknowledged on our website

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner

Campaign objectives

EnergyShield has created a first version of the security culture assessment tool, which we would like to beta test.

Requirements for this campaign

No requirements except an internet connection and a browser; all browser types and devices are acceptable.

Beta test instructions and scenario

For the beta-testing campaign: create a user group in the tool, create a campaign, answer a questionnaire and review the results of the assessment. The tool is available at http://energyshield.epu.ntua.gr/. Information and a guide to the platform are available here: https://1drv.ms/w/s!Avx-hU-EvNxviEse2KU6hPqEoY4O?e=Hn5byP

Campaign Mailing List

▲ Back


Safe-DEED

A competitive Europe where individuals and companies are fully aware of the value of the data they possess and can feel safe to use it.

▼ Click for campaign details and rewards

Safe-DEED (Safe Data-Enabled Economic Development) brings together partners from the cryptography, data science, business innovation, and legal domains to focus on improving security technologies, strengthening trust, and diffusing privacy-enhancing technologies. Furthermore, as many companies have no data valuation process in place, Safe-DEED provides a set of tools to facilitate the assessment of data value, thus incentivizing data owners to make use of the scalable cryptographic protocols developed in Safe-DEED to create value for their companies and their clients.

Project website:

 

Personal Data Demonstrator

Estimated Test Duration:

1 hour

Reward for this campaign:  

30€

Incentives

Beta Testers will be offered to be added to the ReachOut "Hall of Fame", will automatically take part in the ReachOut Super Prize, and 24 randomly chosen Beta Testers with a fully completed questionnaire will be awarded a money prize in recognition.

Target beta testers profile:

Business users

Beta tester level:

Beginner, Intermediate, Advanced

Campaign objectives

The Safe-DEED project would like to evaluate the completeness of the proposed demonstrator in terms of business application, value and roles.

Requirements for this campaign

You can access the demonstrator using any Web browser.

Beta test instructions and scenario

Please follow the https://demo.safe-deed.eu/ link and evaluate all subordinate applications. Instructions can be found in embedded videos on the main page of the demonstrator as well as on the applications' pages. Additional explanations are also provided where appropriate.

Campaign Mailing List

▲ Back


more completed campaigns

Latest Upcoming Campaigns

ENSURESEC

ENSURESEC addresses the whole gamut of modern e‑co...

DECODER Framework

DECODER is an IDE that helps developers to increas...

Passport

...

ENSURESEC

...

HiDALGO

Center of Excellence developing novel methods, alg...

 

The Beta-Testing Campaign Platform for Research Projects.

What is ReachOut's main objective? ReachOut helps H2020 projects in the area of software technologies to develop beta-testing campaigns for their software. ReachOut helps build bridges between projects and their markets. ReachOut provides projects with end-to-end support to develop and launch beta-testing campaigns so as to enable them to concretely engage with their potential users and develop their ecosystems.



What is Beta-Testing?

Beta testing is intended to collect feedback from customers on a pre-release product to improve its quality. This is the last stage before shipping a product. Not only does it help finalize a product, it is also a marketing tactic that helps develop a base of early adopters.


News and Events


 



Community

Be part of the growing ReachOut community. Subscribe here to receive new campaigns, best practices, and recommendations.


Contact Us

Do not hesitate to write to us directly for any other questions, proposals or partnership enquiries.



Partner Projects

Beneficiaries of H2020 cascade funding projects are welcome to join ReachOut. More.

  • EDI-final
  • NGI_Ledger
  • NGI_Pointer
  • NGI_DAPSI

    

The ReachOut project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement number 825307.

The information in this document is provided “as is”, and no guarantee or warranty is given that the information is fit for any particular purpose. The content of this document reflects only the author's view – the European Commission is not responsible for any use that may be made of the information it contains. The users use the information at their sole risk and liability.

This wiki is licensed under a Creative Commons 4.0 license