22 November
Databench Toolbox Campaign is Open
Benchmark your big data and AI architecture: try the Databench toolbox now!
16 November
Smooth GDPR Campaign is Open
Check your GDPR compliance using the Smooth platform.
13 November
SFScon 2020, November 13-14, Bolzano, Italy
OW2 has been invited by SFScon 2020 to organize an OW2 presentation track. ...
05 November
Articonf Decentralized Social Media
Try the Crowd Journalism platform on your PC or phone.
19 October
GeoTriples Spark Campaign is Open
Try GeoTriples Spark beta to transform big geospatial data into RDF graphs.

ReachOut for Project Leaders

Are you a project leader?

Set up a beta-testing campaign for your project!

  • Register your project
  • Arrange a training session
  • Promote the campaign
  • Learn from feedback

Improve your software
Align with market expectations

ReachOut for Beta Testers

Are you a beta tester?

Check out Existing Campaigns

Participate in research project beta-testing campaigns!

  • Choose your beta-testing job
  • Work through the tutorial
  • Answer the feedback questions
  • Pick up your reward

Look inside state-of-the-art software
Enhance your professional network


Check out these campaigns


SMOOTH

Assisting Micro Enterprises to adopt and be compliant with GDPR

▼ campaigns

The SMOOTH project helps micro enterprises adopt and comply with the General Data Protection Regulation (GDPR) by designing and implementing easy-to-use and affordable tools that raise awareness of their GDPR obligations and analyse their level of compliance with the new data protection regulation.

 

SMOOTH Market Pilot

Starts on:

15/11/2020

Ends on:

31/12/2020

Estimated Test Duration:

30-45min

Target beta testers profile:

Business users

Beta tester level:

Beginner

Campaign objectives

The objective of this campaign is to reach out to 500 micro-enterprises to complete the SMOOTH market pilot.

Requirements for this campaign

Micro enterprises: enterprises that employ fewer than 10 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 2 million;

or small enterprises (SMEs): enterprises that employ fewer than 50 persons and whose annual turnover and/or annual balance sheet total does not exceed EUR 10 million, excluding enterprises that qualify as micro enterprises.

Beta test instructions and scenario

Please read these instructions carefully before completing the questionnaires.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Incentives

1) A free GDPR compliance report including a series of recommendations to improve your company’s compliance with the GDPR.
  
2) Being compliant avoids potential fines. Lack of awareness, expertise and resources makes small enterprises the most vulnerable to strict enforcement of the GDPR.

3) Build up your brand reputation with clients and network by showing you have adequate solutions in place to protect their data.

Also, beta testers will be offered a place in the ReachOut "Hall of Fame", will automatically take part in the ReachOut Lottery, and 24 randomly chosen beta testers will be awarded a money prize in recognition.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire to enter the ReachOut incentives programme and to join this campaign's mailing list, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ back


DataBench Toolbox

Building on existing efforts in big data benchmarking, the DataBench Toolbox provides a unique environment to search, select and deploy big data benchmarking tools, along with knowledge about benchmarking.

▼ campaigns

At the heart of DataBench is the goal of designing a benchmarking process that helps European organizations developing big data technologies (BDT) reach excellence and constantly improve their performance, by measuring their technology development activity against parameters of high business relevance.

DataBench will investigate existing Big Data benchmarking tools and projects, identify the main gaps and provide a robust set of metrics to compare technical results coming from those tools.

Project website:

 

Generation of architectural Pipelines-Blueprints

Starts on:

22/11/2020

Ends on:

13/12/2020

Estimated Test Duration:

30 minutes, plus mapping to the blueprints, which requires desk analysis

Target beta testers profile:

Developers

Beta tester level:

Advanced

Campaign objectives

DataBench has released the DataBench Toolbox, a one-stop shop for big data and AI benchmarking. It offers a catalogue of existing benchmarking tools and information about technical and business benchmarking. 

This campaign aims to collect content in the form of new architectural big data/AI blueprints mapped to the BDV Reference Model and the DataBench pipeline/blueprint. It focuses mainly on advanced users who would like to contribute practical examples of mapping their architectures to the generic blueprints. The results will be published in the DataBench Toolbox with ownership acknowledged, and owners can use them in their own projects/organizations to demonstrate their alignment with existing standardization efforts in the community.

Note that we provide information about the BDV Reference Model, the four steps of the DataBench Generic data pipeline (data acquisition, preparation, analysis and visualization/interaction), and the generic big data blueprint devised in DataBench, as well as some examples and best practices for producing the mappings. Testers should study the available DataBench information and guidelines, then follow the provided steps to prepare their own mappings, resulting diagrams and explanations, if any. The Toolbox provides a web form interface to upload all relevant materials, which will be assessed by a DataBench editorial board before final publication in the Toolbox.

Requirements for this campaign

- Having a big data/AI architecture in place in your project/organization
- Willingness to provide mappings from your architecture to the DataBench pipeline/blueprints
- Basic knowledge of web browsing
- Internet connection
- Google Chrome preferred

For any inquiry regarding this campaign, please write an email to databenchtoolbox@gmail.com.

Beta test instructions and scenario

The Toolbox is accessible without logging in, but the options are then limited to pure search: without registering, the options in the menu are very few. To perform this campaign, we would like all participants to first register with the DataBench Toolbox to create a user profile that you will use throughout the campaign:

- Go to https://databench.ijs.si/ and click on the “Sign up” option located at the top right of the page.

- Fill in the form to create your new user by providing a username and password of your choice, your organization, email, and your user type (at least Technical for this exercise).

Once you have created your user, please sign in to the Toolbox. You will be directed back to the Toolbox main page, where you will see that more options are available.

Besides the options available through the menu, the main page provides:
A) a carousel with links,
B) user journeys for users of different profiles: Technical, Business and Benchmark providers,
C) videos aimed at these 3 types of users, briefly explaining the main functionalities offered to each of them,
D) shortcuts to some of the functionalities, such as the FAQ, access to the benchmarks and knowledge catalogues, the DataBench Observatory, etc.

A) Get information about DataBench pipelines and blueprints

This campaign aims to provide you with the means to search and browse existing data pipelines, together with explanations of how to map your own architecture to efforts such as the BDV Reference Model, the DataBench Framework and the mappings to existing initiatives.

We encourage you to first go to the Technical user journey accessible from the front page of the Toolbox, read it and follow the links it gives you to get acquainted with the entries related to blueprints and pipelines. In the “Advanced” user journey you will find the following:

- A link to the DataBench Framework and its relation to the BDV Reference Model, where you can find an introduction to the different elements that compose the DataBench approach to technical benchmarking.

- A link to the DataBench Generic Pipeline, where the 4 main steps of data pipelines are explained. These 4 steps are the basic building blocks for the mappings to other blueprints and existing initiatives.

- User Journey - Generic Big Data Analytics Blueprint: this is the main piece of information you need to understand what we mean by mapping an existing architecture to our pipelines and blueprints. You will find links to the generic pipeline figure.

- A practical example of creating a blueprint and a derived cost-effectiveness analysis: Targeting the Telecommunications Industry.

- Ways to report your suggestions for new blueprints, using the Suggest blueprint/pipeline option under the Knowledge Nuggets menu.

Below is a summary of the minimal set of actions we encourage you to do:

  1. Go to the User journeys area of the main page and click on “Technical”.

  2. Go to the link to the User Journey: Generic Big Data Analytics Blueprint at the bottom of the “Advanced” area of the page.

  3. Read and understand the different elements of the pipeline (the 4 steps) and the elements of the generic blueprint as described in the previous link.

  4. Check examples of already existing blueprints: use the search box located at the top right corner, type “blueprint”, and browse through the blueprints.

B) Desk analysis

Once you are familiar with the DataBench Toolbox and the main concepts related to the blueprints, you need to do some homework. You should try to map your own architecture to the DataBench pipeline and the generic blueprint. We suggest the following steps:

- Prepare a figure with the architecture you have in mind in your project/organization. 

- Create links to the 4 steps of the data pipeline and generate a new figure showing the mapping.

- Create links to the Generic Big Data Analytics Blueprint figure and generate a new figure showing the mappings. To do so you might use the generic pipeline figure and particularize it to your components, as was done in the example provided for the Telecommunications Industry.

C) Upload your blueprint to the Toolbox

- Upload your files as PDFs or images using the blueprint suggestion form available from the Knowledge Nuggets menu. Try to include a description with a few words about your blueprint's sector of application, the main technical decisions, or anything else you find interesting to share.

- The DataBench project will review the blueprints and publish them on the platform, acknowledging your authorship.

Congratulations! You have completed the assignment of this campaign! Go now to fill in the feedback questionnaire. Please note that filling in the questionnaire will be your ticket for incentives.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Incentives

In recognition of your efforts and useful feedback, you will be added as a DataBench contributor on our website, your blueprint will be published, and your authorship will be acknowledged in the Toolbox. This offer is limited to beta testers interacting with the team by 15 December 2020. You will be contacted individually about contribution opportunities. Please provide a valid contact email during the survey phase and in the blueprint suggestion form.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire to enter the ReachOut incentives programme and to join this campaign's mailing list, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

 

Finding the right benchmarks for technical and business users

Starts on:

22/11/2020

Ends on:

08/12/2020

Estimated Test Duration:

30 to 40 minutes

Target beta testers profile:

Business users, Developers

Beta tester level:

Intermediate

Campaign objectives

DataBench has released the DataBench Toolbox, a one-stop shop for big data and AI benchmarking. It offers a catalogue of existing benchmarking tools and information about technical and business benchmarking. 

This campaign aims to gather feedback on the usage of the Toolbox and on the user interface of its web front end. The Toolbox provides a set of user journeys, or suggestions, for three kinds of users: 1) technical users (people interested in technical benchmarking), 2) business users (interested in finding facts, tools, examples and solutions to support business choices), and 3) benchmark providers (users from benchmarking communities or who have created their own benchmarks). In this campaign we focus mainly on technical and business users. We provide minimal instructions for these two types of users, to understand whether finding information in the Toolbox is a cumbersome process and to collect your feedback. The idea is to use the user journeys drafted in the Toolbox to drive the search process and to find out whether users consider this information enough to kick-start the process of finding the right benchmark and knowledge they were looking for.

Requirements for this campaign

- Previous knowledge of big data or AI
- Basic knowledge of web browsing
- Internet connection
- Google Chrome preferred

For any inquiry regarding this campaign, please write an email to databenchtoolbox@gmail.com.

Beta test instructions and scenario

The Toolbox is accessible without logging in, but the options are then limited to pure search: without registering, the options in the menu are very few.

Initial steps to log in as a Toolbox user

To perform this campaign, we would like all participants to first sign up for the DataBench Toolbox and create a user profile that you will use throughout the campaign:

- Go to http://databench.ijs.si/ and click on the “Sign up” option located at the top right of the page.
- Fill in the form to create your new user by providing a username and password of your choice, your organization, email, and your user type (Technical and/or Business, depending on your preferences and skills).

Once you have created your user, please sign in to the Toolbox. You will be directed back to the Toolbox main page, where you will see that more options are available.

Besides the options available through the menu, the main page provides:
A) a carousel with links,
B) user journeys for users of different profiles: Technical, Business and Benchmark providers,
C) videos aimed at these 3 types of users, briefly explaining the main functionalities offered to each of them,
D) shortcuts to some of the functionalities, such as the FAQ, access to the benchmarks and knowledge catalogues, the DataBench Observatory, etc.

A) For Technical Users

This campaign aims to use the user journeys as a starting point to help you navigate the tool. We encourage you to click on the Technical user journey, read it and follow the provided links to get acquainted with the tool and what you can do with it. Get to know the two main catalogues: the benchmarks catalogue (tools for big data and AI benchmarking) and the knowledge nuggets catalogue (information about technical and business aspects of benchmarking and big data technologies). Learn about existing big data architectural blueprints and browse some of them.

Additionally, if you already have a goal in mind (e.g. finding a benchmark for testing a specific ML model, or comparing the characteristics of different NoSQL databases), we encourage you to try to find the appropriate benchmark and report your conclusions later in the questionnaire.

Below is a summary of the minimal set of actions we encourage you to do:

  1. Go to the User journeys area of the main page and click on “Technical”.

  2. Read the content of this page, divided into advice for “Beginners” (first-time users) and “Advanced” (extra recommendations on what to do next). Focus first on the “Beginners” area and click on the different links to browse the different options and get used to the tool. We recommend returning to the User journey page until you have clicked on all the available options for beginners, but feel free to stray and use the navigation and links from other pages to get used to the tool. After clicking on all the options for beginners, you should have seen the benchmarks and knowledge nuggets catalogues, used some of the search functionalities and browsed some of the existing architectural blueprints. You are now ready to go further!

  3. Focus now on the “Advanced” area of the User journey page.

  - Here you will find ways to suggest new content via web forms (e.g. new benchmarks you know of that are missing from the catalogue, a version of a big data blueprint you are dealing with in a project, or a new knowledge nugget based on your experience). We are not expecting you to fill in these forms at this stage, just to acknowledge their potential value (and feel free to contribute any time).

  - You will also find links to more specific advanced user journeys and practical examples at the end of the advanced user journey. Click on the ones that catch your attention and start navigating via the links they offer. From this moment we expect that you know the main options of the Toolbox and how to navigate and browse through it. You should have noticed by now that both benchmarks and knowledge nuggets are annotated and categorized with clickable tags, which makes navigation through related items possible.

  4. Get used to the search functionalities. The Toolbox offers 4 types of search:
  - The search text box located at the top right corner of the pages. This is a full-text search: enter any text, and the matching results from both the benchmarks and knowledge nuggets catalogues will appear.

  - The “Search by BDV Reference Model” option in the menu lets you look at the model created by the BDV PPP community (see the BDV SRIA for more details). The model is represented graphically and is clickable. If you click on any of the vertical or horizontal layers of the model, you will be directed to the benchmarks and/or knowledge annotated with those layers in the Toolbox. Browse through this search.

  - The “Guided benchmark search”. In simple terms, this is a search by the tags used to annotate benchmarks and knowledge nuggets. These tags range from technical to business aspects. You can click on the categories of tags to find related information. Browse some of the options of this search.

  - Finally, the “Search by Blueprint/Pipeline” option graphically presents a generic architectural blueprint developed in DataBench with the most common elements of a big data architecture. The blueprint is aligned with the 4 steps of the DataBench Generic data pipeline (data acquisition, preparation, analysis and visualization/interaction). The graphic is clickable, both at the level of the four pipeline steps and on some of the detailed elements of the blueprint. Click on the parts of the diagram you are interested in to find a list of existing benchmarks and nuggets related to them, and browse some of them. There are nuggets that summarize existing big data tools for each element of the pipeline. See if you find it easy to browse through the results.

Congratulations! You have completed the assignment of this campaign! Go now to fill in the feedback questionnaire. 

NOTE – Some of the available benchmarks can be deployed and run on your own premises. These are listed first in the benchmark catalogue, and when you click on them you will find the configuration file at the bottom of their description. If you want to run any of them, you will need dedicated infrastructure. We do not expect you to do so in this exercise.

B) For Business users

As for technical users, this campaign aims to use the user journeys as a starting point to help you navigate the tool. We encourage you to click on the Business user journey, read it and follow the links it gives you to get acquainted with the tool and what you can do with it. Get to know the two main catalogues: the benchmarks catalogue (tools for big data and AI benchmarking) and, above all, the knowledge nuggets catalogue (information about technical and business aspects of benchmarking and big data technologies). Learn about existing big data architectural blueprints and browse some of them, as they apply to different industries and might be of interest for business purposes.

Additionally, if you already have a goal in mind (e.g. finding the most widely used business KPIs in a specific sector), we encourage you to try to find the appropriate information in the knowledge nuggets catalogue and report your conclusions later in the questionnaire.

Below is a summary of the minimal set of actions we encourage you to do:

  1. Go to the User journeys area of the main page and click on “Business”.

  2. Read the content of this page, divided into advice for “Beginners” (first-time users) and “Advanced” (extra recommendations on what to do next). Focus first on the “Beginners” area and click on the different links to browse the different options and get used to the tool. We recommend returning to this User journey page until you have clicked on all the available options for beginners, but feel free to stray and use the navigation and links from other pages to get used to the tool. After clicking on all the options for beginners, you should have seen the benchmarks and knowledge nuggets catalogues, used some of the search functionalities and browsed some of the existing architectural blueprints. You are now ready to go further!

  3. Focus now on the “Advanced” area of the User journey page.
  - You will find links to different elements, such as nuggets related to business KPIs, by industry, etc. Browse through them and follow the links.

  - You will find ways to suggest new content via web forms (e.g. a new knowledge nugget based on your experience). We are not expecting you to fill in these forms at this stage, just to acknowledge their potential value (but feel free to contribute any time).

  - You will also find links to more specific advanced user journeys and practical examples at the end of the advanced user journey. Click on the ones that catch your attention and start navigating via the links they offer. From this moment we expect that you know the main options of the Toolbox and how to navigate and browse through it. You should have noticed by now that both benchmarks and knowledge nuggets are annotated and categorized with clickable tags, which makes navigation through related items possible.

  4. Get used to the search functionalities. The Toolbox offers 4 types of search:
  - The search text box located at the top right corner of the pages. This is a full-text search: enter any text, and the matching results from both the benchmarks and knowledge nuggets catalogues will appear.

  - The “Search by BDV Reference Model” option in the menu lets you look at the model created by the BDV PPP community (see the BDV SRIA for more details). The model is represented graphically and is clickable. If you click on any of the vertical or horizontal layers of the model, you will be directed to the benchmarks and/or knowledge annotated with those layers in the Toolbox. Browse through this search.

  - The “Guided benchmark search”. In simple terms, this is a search by the tags used to annotate benchmarks and knowledge nuggets. These tags range from technical to business aspects. You can click on the categories of tags to find related information. Browse some of the options of this search.

  - Finally, the “Search by Blueprint/Pipeline” option graphically presents a generic architectural blueprint developed in DataBench with the most common elements of a big data architecture. The blueprint is aligned with the 4 steps of the DataBench Generic data pipeline (data acquisition, preparation, analysis and visualization/interaction). The graphic is clickable, both at the level of the four pipeline steps and on some of the detailed elements of the blueprint. Click on the parts of the diagram you are interested in to find a list of existing benchmarks and nuggets related to them, and browse some of them. There are nuggets that summarize existing big data tools for each element of the pipeline. See if you find it easy to browse through the results.

  5. This part of the test is not guided, as we expect you to navigate through the options you have seen previously. Once you know how to navigate, try to find information relevant to your industry or area of interest:
  • Try to find information about the most widely used KPIs or interesting use cases.
  • Try to find information about architectural blueprints for your inspiration.

Congratulations! You have completed the assignment of this campaign! Go now to fill in the feedback questionnaire.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Incentives

In recognition of your efforts and useful feedback, you will be added as a DataBench contributor on our website. This offer is limited to beta testers interacting with the team by 6 December 2020. You will be contacted individually about contribution opportunities. Please provide a valid contact email during the survey phase.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire to enter the ReachOut incentives programme and to join this campaign's mailing list, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ back


Carsharing Use Case

Car sharing is a form of person-to-person or collaborative consumption, whereby existing owners rent their cars to other people for short periods of time.

▼ campaigns

Car sharing is a form of person-to-person or collaborative consumption, whereby existing owners rent their cars to other people for short periods of time. Essentially, this use case provides a collaborative business model as an alternative to private car ownership, allowing customers to use a vehicle temporarily, on demand, for a variable fee depending on the distance travelled or usage.

Project website:

 

Beta-tester Passengers

Starts on:

01/10/2020

Ends on:

30/11/2020

Estimated Test Duration:

30-45 minutes

Target beta testers profile:

Business users

Beta tester level:

Intermediate

Campaign objectives

The objective of this campaign is to adapt the use case to the market that Agilia Center is targeting, finding insights that can be transformed into functionalities and integrated during the development phase.
After this step, we will include these prerequisites in the roadmap of the service (Service Backlog) for the acceptance tests that will be carried out at the completion of the development stage. The process for testing the requirements will follow a methodology designed not only to test the existing features but also to extract new information.

Requirements for this campaign

  • Android device (Android 9 Pie)
  • Allow app installs from unknown sources in Android
  • Internet connection
  • Turn on location on the phone, or use a fake GPS application such as Fake GPS Location or Fake GPS Free

Beta test instructions and scenario

Introduction

From now on, you will act as a passenger: a person who wants to share a car with other people (at least a driver) for a short trip from one point to another.

Instructions

Instructions will be provided within the survey. Please go to the survey.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire to enter the ReachOut incentives programme and to join this campaign's mailing list, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

 

Beta-tester Drivers

Starts on:

01/10/2020

Ends on:

30/11/2020

Estimated Test Duration:

30-45 minutes

Target beta testers profile:

Business users

Beta tester level:

Intermediate

Campaign objectives

The objective of this campaign is to adapt the use case to the market that Agilia Center is targeting, finding insights that can be transformed into functionalities and integrated during the development phase.
After this step, we will include these prerequisites in the roadmap of the service (Service Backlog) for the acceptance tests that will be carried out at the completion of the development stage. The process for testing the requirements will follow a methodology designed not only to test the existing features but also to extract new information.

Requirements for this campaign

  • Android device (Android 9 Pie)
  • Allow app installs from unknown sources in Android
  • Internet connection
  • Turn on location, or use a fake GPS application such as Fake GPS Location or Fake GPS Free

Beta test instructions and scenario

Introduction

From now on, you will act as a driver: a person who wants to rent a car and drive it for a short period of time from one point to another.

Instructions

Instructions will be provided within the survey. Please go to the survey.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire to enter the ReachOut incentives programme and to join this campaign's mailing list, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ back


DECIDE

Multicloud Applications Towards the Digital Single Market

▼ campaigns

DECIDE is a new-generation, multi-cloud, service-based software framework providing mechanisms to design, develop, and dynamically deploy multi-cloud-aware applications in an ecosystem of reliable, interoperable, and legally compliant cloud services.
DECIDE is composed of a set of tools that cover the entire DevOps pipeline, from design and development to deployment and operations. All the tools are integrated via the DevOps framework UI, which provides a unified user interface and orchestrates their execution when necessary.

 

DECIDE Platform

Starts on:

20/04/2020

Ends on:

31/12/2020

Estimated Test Duration:

3 hours

Target beta testers profile:

Developers

Beta tester level:

Intermediate

Campaign objectives

By becoming a beta tester of DECIDE you will be able to experience the full DevOps lifecycle of a multi-cloud application via the unified DECIDE DevOps framework UI.

Requirements for this campaign

• Intermediate knowledge of cloud computing.
• Advanced knowledge of DevOps.

Beta test instructions and scenario

Install and configure the individual services (check the “Delivery and Usage” section of each document):
  • ARCHITECT cloud patterns service
  • OPTIMUS simulation service
  • MCSLA service
  • ACSmI discovery/contracting/monitoring services
  • ADAPT deployment orchestrator
  • ADAPT violation handler
Install and configure the DECIDE DevOps framework (check the “Delivery and Usage” section):
  • DECIDE DevOps framework
Connect to the DevOps framework web interface and follow the workflow to create, deploy and manage a multi-cloud application (check the “User Manual” sections of the aforementioned documents).

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Incentives

ReachOut goodies (ask for them!)

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire to enter the ReachOut incentives programme and to join this campaign's mailing list, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ back


Cross-CPP

One-stop data shop: provides a single point of access to data streams from multiple smart products in easily accessible, non-proprietary data formats.

▼ campaigns

Ecosystem

✓  Driven by the needs of Data Owners, Data Providers and Data Customers

✓  Brand-independent, open platform with a standardized interface → highly attractive for Service Providers

✓  Linking CPP data from different sectors enables higher-quality content and a new world of services

✓  Economical solution for all value chain partners, thanks to a greater number of data customers

✓  Data Providers can profit from the innovation potential of thousands of external experts

User Engagement

✓  Empowers CPP owners to exploit their most valuable asset in the Internet of Things – their CPP data

✓  Owners fully control which data they provide to which Service Provider

 

UI and UX of Cross-CPP data-marketplace Front-end Application

Starts on:

17/08/2020

Ends on:

30/11/2020

Estimated Test Duration:

15-20 minutes

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner

Campaign objectives

The Cross-CPP project has released the integrated final prototype of its Data Marketplace (AGORA). It offers a private and secure platform for trading (buying and selling) vehicle and building datasets.

The objective of this campaign is to get feedback on the user experience (UX) and user interface (UI) of the front-end application. We provide detailed instructions for following the processes of A) a Service Provider (digital company, data-driven startup, etc.) creating "Data Requests", or B) a Data Owner accepting these requests and turning them into offers and contracts.

We would like your feedback on the AGORA solution to improve the experience for a wide range of end users. Don't miss this opportunity to take part in this EU challenge!

Requirements for this campaign

- Basic Knowledge of web browsing
- Internet connection
- Use preferably Google Chrome

Beta test instructions and scenario

A) For Service Providers (looking to acquire data in the Marketplace):

  1. Go to https://ng8.datagora.eu/login and click on the "Sign in" button at the top right of the page.
  2. Enter the email "serviceprovider1@test.com" and the password provided.
  3. In the "Main Menu" on the left side of the application, click on the "Catalogue" section. Apply some filters to the Data Signals Catalogue and search for any signal you are interested in.
  4. Go to "Data Discovery". In this view, you can configure and personalize the type of "Data Requests" you aim to retrieve from the Marketplace. Apply any filters you consider interesting for your service and finally press "Discovery". E.g. Signal Type: Vehicle Speed (or more) / Add Suggestions (if needed) / Duration: All years / Location: Spain.
  5. Check the "Discovery Results" for available data in the marketplace. Then check "Analytics" or "Context Filtering" to access all the functionalities. Then click on "Create Data Request".
  6. Congratulations! You have created your first Data Request. Go to the "Main Menu" and click on "Data Wallet"; you will see a drop-down menu with "Data Requests" and "Data Transactions". Click on "Data Requests" to see the one you have created.

B) For Data Owners (accepting Data Requests and exchanging the data you generate):

  1. Go to https://ng8.datagora.eu/login and click on the "Sign in" button at the top right of the page.
  2. Enter the email "ownertest@test.com" and the password provided.
  3. Go to the upper part of the application and first select the type of device from which you would accept to share your data. There are two categories: "Vehicles" and "Buildings".
  4. In the "Main Menu" on the left side of the application, click on the drop-down menu "Data Wallet"; you will see "Available Data Requests", "Accepted Data Requests", "Data Collected" and "Transaction Summary".
  5. Click on "Available Data Requests" to find all the "Data Requests" published by interested Service Providers. You can accept or decline: as a Data Owner, the choice of whether or not to share your data is yours.
  6. Finally, if you click on "Accepted Data Requests" you will see the full list of accepted requests for which you have granted permission to access your data. If you are no longer interested, you can withdraw your consent to share data, and the offer is terminated.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Incentives

In recognition of your efforts and useful feedback, you will be added as a Cross-CPP contributor on our website. This offer is limited to beta testers interacting with the team by 15 October 2020. You will be contacted individually about contribution opportunities. Please provide a valid contact email during the survey phase.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire to enter the ReachOut incentives programme and to join this campaign's mailing list, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ back


IOF2020 Use Case Big Wine Optimization - Remote Wine Analysis product

Optimizing the cultivation and processing of wine by sensor-actuator networks and big data analysis within a cloud framework.

▼ campaigns

What is the objective of the Remote Wine Analysis System?
To perform frequent, inexpensive, remote characterization of wine composition in order to preserve the maximum expression of grape quality potential throughout the winemaking phases.
How does it work?
A spectrophotometer reader – operating in the IR spectrum range – detects absorbance data from a wine sample in the winery and sends them to the cloud, where they are processed through a calibration curve based on a vast database, finally providing the winery with the desired compositional parameters.

 

Remote Wine Analysis Phase 2

Starts on:

26/10/2020

Ends on:

31/12/2020

Estimated Test Duration:

2 hours

Target beta testers profile:

Business users

Beta tester level:

Beginner, Intermediate

Campaign objectives

Testers are asked to use the system to perform analyses during the harvest period.

Requirements for this campaign

Testers should be able to perform analyses during the harvest period.

Beta test instructions and scenario


Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Incentives

Testers participating in the campaign will take part in the ReachOut Lottery and may win a prize.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire to enter the ReachOut incentives programme and to join this campaign's mailing list, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ back


IOF2020 Use Case Beverage Integrity Tracking

Beverage Integrity Tracking is a system based on IoT technologies that allows monitoring of transport conditions and opens a direct communication channel from the producer to the retailer.

▼ campaigns

What is B.I.T. (Beverage Integrity Tracking)?
Tracking wines and other beverages along the transportation chain is the main goal of the B.I.T. project.
The increasing economic and strategic relevance of export markets requires producers to gain control over the transportation conditions of their goods and to establish direct contact with final clients.
B.I.T. uses Internet of Things technologies to obtain data on shipping conditions and on final client satisfaction, allowing beverage producers to know exactly if, when and where incidents occur during transportation (excess heat, low temperatures), and to receive feedback from final retailers (comments and/or complaints from wine shop & restaurant clients).
The device code is coupled with a web page or documents corresponding to the wine in the box, reporting all the information and marketing arguments the producer wants to communicate to retailers.

 

Beverage Integrity Tracking Test Beds Phase 2

Starts on:

01/07/2020

Ends on:

31/12/2020

Estimated Test Duration:

2 hours

Target beta testers profile:

Business users

Beta tester level:

Beginner, Intermediate

Campaign objectives

Testing producers are asked to upload information about the shipment, product marketing and low/high temperature thresholds to the Beverage Integrity Tracking platform.
Testing producers will then communicate the retailers' contact details to the IOF2020 project team, who will ask the retailers to download the app to transfer shipment data to the platform, to download product marketing information and, if necessary, to give feedback on the product on its arrival.
The objective of the campaign is to check that the system works, to collect their feedback on the system's features and usage, and to collect temperature data during shipments.

Requirements for this campaign

Testers should sell wine to retailers over the summer/beginning of autumn period.

Beta test instructions and scenario

Producers should use the system with real shipments (enter data in the platform, activate the data logger) that arrive at their destination over the summer or early autumn.
Producers should put the IOF2020 project team in contact with retailers willing to close the loop by downloading the APP and providing their feedback.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Incentives

Producers can offer their retailers a product refund (25% of the value, capped at 45 euros per beverage box), which will be paid for by IOF2020.
Also, producers participating in the campaign will take part in the ReachOut Lottery and may win a prize.

Please provide your e-mail address below and in the feedback questionnaire to enter the ReachOut incentives programme and to join this campaign's mailing list, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ back


ARTICONF

A novel set of trustworthy, resilient, and globally sustainable decentralised social media services

▼ campaigns

ARTICONF addresses issues of trust, time-criticality and democratisation for a new generation of federated infrastructure, in order to fulfil the privacy, robustness and autonomy-related promises that proprietary social media platforms have so far failed to deliver.
The first demo covers two tools: TIC and CONF.

For more information about the project and the latest news, please visit https://articonf.eu

Project website:

 

ARTICONF v1

Starts on:

22/11/2020

Ends on:

31/12/2020

Estimated Test Duration:

20-30 mins

Target beta testers profile:

Developers

Beta tester level:

Intermediate, Advanced

Campaign objectives

The objective of this campaign is to test the first version of the ARTICONF toolset and to collect feedback from DApp (distributed application) developers, which can be used to adapt the services to market requirements.

Requirements for this campaign

  • Personal computer.
  • Internet connection.
  • Web browser – Firefox.
  • Knowledge:
    • Expertise in deployment and blockchain is not needed to understand the basics of how the toolset works and its benefits.
    • Knowledge of Hyperledger Fabric concepts is required to configure the blockchain network.

Beta test instructions and scenario

Introduction

The project presents two tools integrated with a user interface to interact with them:

  1. Trust and Integration Controller (TIC): enables developers to use and configure a robust Hyperledger Fabric blockchain network, allowing users to apply its logic, privacy and data consensus in their applications.
  2. Co-located and Orchestrated Network Fabric (CONF): in its current version, automatically deploys the aforementioned Hyperledger Fabric network along with the APIs to interact with it.

To use both tools, a user interface has been designed and provided for the testing. This interface allows the use of the tools and, by means of a simple application, can transfer tokens between two peers using the underlying blockchain network. Moreover, through the navigator, it enables developers and end users to see how the chain is updated after every transaction and to configure the organization in the Hyperledger network.

Instructions

In order to test the provided scenario:

  1. Navigate to the following URL: http://tac.uist.edu.mk/beta/testing
    In the interface, use these credentials:
    - user name: beta@testing.com
    - password: ARTICONF
  2. Follow the instructions provided at the URL above.

Feedback questionnaire

When you are done with the testing, please fill in the feedback questionnaire.
Please note that filling in the questionnaire will be your ticket for incentives.

Campaign Mailing List

Please provide your e-mail address below and in the feedback questionnaire to enter the ReachOut incentives programme and to join this campaign's mailing list, through which you can interact with the Campaign Manager. Find out more about ReachOut informed consent.

▲ back

Completed Campaigns


STAMP

Software Testing AMPlification for the DevOps Team

▼ campaigns

STAMP stands for Software Testing AMPlification. Leveraging advanced research in automatic test generation, STAMP aims to push automation in DevOps one step further through innovative methods of test amplification.

STAMP reuses existing assets (test cases, API descriptions, dependency models) to generate more test cases and test configurations each time the application is updated. Acting at all steps of the development cycle, STAMP techniques aim to reduce the number and cost of regression bugs at the unit level, the configuration level and the production stage.

STAMP raises confidence in and fosters adoption of DevOps by the European IT industry. The project gathers four academic partners with strong software testing expertise, five software companies (in e-health, content management, smart cities and public administration), and an open source consortium. This industry-near research addresses concrete, business-oriented objectives.

 

Try the STAMP toolset

Estimated Test Duration:

2 hours

Target beta testers profile:

Developers

Beta tester level:

Beginner

Campaign objectives

Trying the open source toolset is a free initiative that will amplify your testing efforts automatically. Experiment with DSpot, Descartes, CAMP or Botsing now.

Requirements for this campaign

Download and try DSpot, Descartes, CAMP or Botsing.

Beta test instructions and scenario

Incentives

You'll have nothing to lose and everything to gain, including time and quality in your software releases!
Moreover, you'll be among the first to experiment with the most advanced Java software testing tools.

And, in recognition of your efforts and useful feedback, you will receive a limited-edition “STAMP Software Test Pilot” gift and be added as a STAMP contributor. This offer is limited to beta testers interacting with the team by 30 October 2019. You will be contacted individually for a customized gift and for contribution opportunities. Please provide a valid contact email.

Campaign Mailing List

▲ back


EnergyShield - Security Culture Assessment Tool

EnergyShield is a complete, state-of-the-art security toolkit for the EPES sector.

▼ campaigns

EnergyShield captures the needs of Electrical Power and Energy System (EPES) operators and combines the latest technologies for vulnerability assessment, supervision and protection to draft a defensive toolkit. The project aims to:
  • Adapt and improve available building tools (assessment, monitoring & protection, remediation) to support the needs of the EPES sector.
  • Integrate the improved cybersecurity tools into a holistic solution with assessment, monitoring/protection and learning/sharing capabilities that work synergistically.
  • Validate the practical value of the EnergyShield toolkit in demonstrations involving EPES stakeholders.
  • Develop best practices, guidelines and methodologies supporting the deployment of the solution, and encourage widespread adoption of the project results in the EPES sector.

 

EnergyShield SBAM Tool

Estimated Test Duration:

20 to 30 minutes

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner

Campaign objectives

EnergyShield has created a first version of the security culture assessment tool. We would like to beta test this first version.

Requirements for this campaign

No requirements except an internet connection and a browser – all browser types and devices are acceptable.

Beta test instructions and scenario

For the beta-testing campaign: create a user group in the tool, create a campaign, answer a questionnaire and review the results of the assessment. The URL of the website is http://energyshield.epu.ntua.gr/. Information and a guide to the platform are available here: https://1drv.ms/w/s!Avx-hU-EvNxviEse2KU6hPqEoY4O?e=Hn5byP

Incentives

Beta testers will be acknowledged on our website.

Campaign Mailing List

▲ back


ReachOut

Beta-testing campaigns for research projects

▼ campaigns

ReachOut is a Coordination and Support Action (CSA) helping H2020 projects in the area of software technologies implement beta-testing campaigns. ReachOut acts as an operational intermediary between research projects and the open market: it helps research projects implement beta-testing best practices, recruits beta testers by running promotion initiatives, and develops connections between research projects and potential users and beta testers.

 

Testing the ReachOut platform

Estimated Test Duration:

30 minutes to 1 hour

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate, Advanced

Campaign objectives

The goal of this campaign is to test the ReachOut platform.
It targets H2020 projects that would like to set up a beta-testing campaign.
You can find out more about ReachOut and the methodology on this page.

Requirements for this campaign

In order to start testing the ReachOut platform, you need to have ready:

  • the name, short description and long description of your project
  • a logo file for your project
  • the objectives of the beta-testing campaign
  • the estimated time it will take a beta tester to test your beta version
  • a beta version of your software, available for download
  • requirements for beta testers to test your software (list of pre-installed software, hardware requirements, operating system constraints, ...)
  • a comprehensive test scenario and instructions
  • (optional) incentives for beta testers to participate in the campaign

Beta test instructions and scenario

In order to test the ReachOut platform, you will need to:

  1. Visit https://reachout-project.eu
  2. Register as a campaign manager
    You will need to provide your details, e-mail address, login and password.
    You will receive a message to your e-mail address with an activation link.
  3. Create your project
    You will have to fill in the project details.
  4. Create your campaign with the appropriate campaign details.
    Note that you can use the XWiki syntax for formatting the details (links, bullets, ...), e.g. **bold text** for bold, a leading * for a bullet item, and [[label>>url]] for a link.
  5. Customize the questionnaire in LimeSurvey
    You can do this by clicking on the "Customize and activate the associated questionnaire" button. Log into LimeSurvey using your ReachOut login and password provided during the registration.
    Once in LimeSurvey, you can edit the questions.
  6. Activate the questionnaire in LimeSurvey
  7. Manage the progress of your campaign using the campaign dashboard
    To do this, go back to your home page on the ReachOut website, and click on the Dashboard button below your campaign details. Then, edit the dashboard, save.
  8. Fill in the questionnaire (as a beta tester)
    To do this, log out of LimeSurvey and ReachOut, go to the ReachOut website, click on "Checkout existing campaigns" and fill in the questionnaire.
  9. View the answers on LimeSurvey
    Log into LimeSurvey and go to your campaign on LimeSurvey.
    Then, click on Statistics in the left menu, then on the "Simple mode" button top left. You can view statistics about the answers that have been provided by beta testers.

Incentives

By participating in this survey, you will help the ReachOut project provide a better service to research projects.

Campaign Mailing List

▲ back


Zql

Java SQL parser

▼ campaigns

Zql is a Java SQL parser generated using JavaCC. It parses SQL constructs (no DDL) and generates a parse tree, accessible through a Java API.

 

Zql beta-test

Estimated Test Duration:

10 minutes

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate, Advanced

Campaign objectives

Build and run unit tests.

Requirements for this campaign

Java 5 or above, Maven.

Beta test instructions and scenario

- Check out the project using git:

git clone https://github.com/gibello/zql.git

- Build the project using Maven:

cd zql/
mvn clean install
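
- Optionally, try the parser from your own code. The snippet below is a minimal sketch assuming the classic Zql API (the Zql.ZqlParser class, its InputStream constructor and its readStatement() method); check the sources in the repository for the authoritative signatures:

import Zql.ZStatement;
import Zql.ZqlParser;
import java.io.ByteArrayInputStream;

public class ZqlDemo {
    public static void main(String[] args) throws Exception {
        // Zql expects statements terminated by a semicolon.
        String sql = "SELECT a, b FROM t WHERE a > 10;";
        // Feed the SQL text to the parser as an InputStream.
        ZqlParser parser = new ZqlParser(new ByteArrayInputStream(sql.getBytes()));
        // readStatement() returns the parse tree for one statement
        // (a SELECT yields a ZQuery, a subclass of ZStatement).
        ZStatement statement = parser.readStatement();
        System.out.println(statement); // re-serializes the parsed statement
    }
}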

▲ back


MELODIC

Open source multi-cloud management platform which allows for optimization and automation of deployment to different cloud providers (AWS, Azure, GCP, OpenStack-based).

▼ campaigns

MELODIC is a cloud-agnostic, optimized way to multi-cloud: a multi-cloud management platform created within the H2020 project of the same name. The MELODIC platform enables and optimizes data-intensive applications to run seamlessly, within defined security, cost and performance boundaries, on geographically distributed and federated cloud infrastructures. Applications are modelled and deployed in a cloud-agnostic manner. Optimization is done continuously using reinforcement learning algorithms. MELODIC is fully open source, licensed under the MPL.

 

MELODIC - multicloud management platform

Estimated Test Duration:

16 hours

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate, Advanced

Campaign objectives

By becoming a beta tester of Melodic you will learn how to use model@runtime with automatic adaptation and optimization of multi-cloud deployments. Melodic is an open source multi-cloud management platform which allows for optimization and automation of deployment to different cloud providers (AWS and OpenStack-based).

Requirements for this campaign

  • Basic knowledge about Cloud Computing. 
  • Access to at least one Cloud Provider.

Beta test instructions and scenario

  1. Install Melodic on your machine as described on the Melodic download page (scenario1).
  2. Deploy a simple two-component application (scenario2.pdf).
  3. Install the Eclipse Oxygen-based Camel editor, which enables you to create your model; the manual is available on Melodic's website (scenario3).
  4. Model and deploy your own application using the Melodic platform (scenario4.pdf).

Incentives

Melodic badge and certificate. For the first or most active beta testers, we will provide project goodies (mugs, ...).

▲ back


GeoTriples-Spark

Publishing geospatial data as Linked Open Geospatial Data. GeoTriples generates and processes extended R2RML and RML mappings that transform geospatial data from many input formats into RDF.

▼ campaigns

Publishing geospatial data as Linked Open Geospatial Data. GeoTriples generates and processes extended R2RML and RML mappings that transform geospatial data from many input formats into RDF.

 

GeoTriples-Spark

Estimated Test Duration:

20 minutes

Target beta testers profile:

Developers

Beta tester level:

Beginner, Intermediate

Campaign objectives

We would like to verify that all RML functions run without exceptions.

Requirements for this campaign

To execute the project, you will need Java 1.8 (Java 8), Maven 3 (or greater), and Spark 2.4.0.

To build the code, clone the repository https://github.com/LinkedEOData/GeoTriples and build it by executing "mvn package".

You can run experiments using the data in https://drive.google.com/file/d/1CZSjgCsRI4-vK82CR35po8Mix5rCjK7y/view?usp=sharing

You can find more information in the repository.

Beta test instructions and scenario

To run a simple experiment, run:

spark-submit --master local[*] --class eu.linkedeodata.geotriples.GeoTriplesCMD /path_to/geotriples-spark.jar spark -i /path_to/greece-natural-a/gis_osm_natural_free_1.shp -o /path_to/folder_to_store_results /path_to/greece-natural-a/gis_osm_natural_free_1.ttl

We would like to check whether the RML processor handles RML term maps as expected (see https://rml.io/specs/rml/#term-map).

We want the user to be able to provide any term map (mostly constant- or template-valued) and get the requested results. Term maps are defined by editing the .ttl mapping file (the rr:objectMap fields; there are some examples in the document). We would therefore like to verify that the program can handle any given term map: try executing the project with different term maps (mostly template-valued) to see how it handles them. A sketch of the common variants follows.
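
For illustration, the snippet below shows the three common rr:objectMap variants as they would appear inside a triples map in the mapping .ttl file, in RML/Turtle syntax. The ex: prefix and the field names (name, osm_id) are hypothetical placeholders; adapt them to the attributes of your own dataset:

rr:predicateObjectMap [
    rr:predicate ex:name ;
    rr:objectMap [ rml:reference "name" ]    # reference-valued: copies a source field
] ;
rr:predicateObjectMap [
    rr:predicate ex:page ;
    rr:objectMap [ rr:template "http://example.com/place/{osm_id}" ]    # template-valued: builds a term from a field
] ;
rr:predicateObjectMap [
    rr:predicate ex:source ;
    rr:objectMap [ rr:constant "OpenStreetMap" ]    # constant-valued: always emits the same term
] .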

Campaign Mailing List

▲ back


Elastest

The ElasTest Platform is being developed within a publicly funded project called ElasTest: an elastic platform for testing complex distributed large software systems.

▼ campaigns

The ElasTest Platform is being developed within a publicly funded project called "ElasTest: an elastic platform for testing complex distributed large software systems". ElasTest ran from January 1, 2017 to December 31, 2019.

Project website:

 

Try ElasTest Platform

Estimated Test Duration:

4 hours

Target beta testers profile:

Developers

Beta tester level:

Beginner, Intermediate

Campaign objectives

In this campaign, you will discover a testing scenario using ElasTest. A test executed in ElasTest can make direct use of multiple integrated services (such as web browsers), and the tester can see all the monitoring information in the same graphical user interface, with advanced analysis features.

Requirements for this campaign

ElasTest can run on different platforms, such as a laptop, a Linux VM, or a server. For more information about the requirements to launch ElasTest, please visit: https://elastest.io/docs/tutorials/getting-started/

Beta test instructions and scenario

The detailed instructions to execute the beta test are available at: https://elastest.io/docs/try-elastest/

▲ back


CROSSMINER

CROSSMINER enables the monitoring, in-depth analysis and evidence-based selection of open source components, and facilitates knowledge extraction from large open-source software repositories.

▼ campaigns

CROSSMINER enables the monitoring, in-depth analysis and evidence-based selection of open source components, and facilitates knowledge extraction from large open-source software repositories.

Project website:

 

CROSSMINER Dashboard testing

Estimated Test Duration:

30 minutes

Target beta testers profile:

Business users, Developers

Beta tester level:

Beginner, Intermediate, Advanced

Campaign objectives

Introduce dashboards to OSS project stakeholders.

Requirements for this campaign

Web Browser

Beta test instructions and scenario

  • Open http://beta.crossminer.org/app/kibana#/dashboards
  • Select a dashboard, for instance "scava-overview"
  • Select a project from the "Project selection" view; this filters the data for the selected project in the other views
  • You can manage filters from the top left bar of the window
  • At the top right of the window, you can edit the time selection (10 years by default)
  • Click on "Dashboard" in the left pane to select another dashboard; the same logic applies to all dashboards
  • The last step is to understand how dashboards are created; follow this short tutorial: https://www.reachout-project.eu/view/Crossminer/kibana

Campaign Mailing List

▲ back


ARTICONF

A novel set of trustworthy, resilient, and globally sustainable decentralised social media services

▼ campaigns

ARTICONF addresses issues of trust, time-criticality and democratisation for a new generation of federated infrastructure, to fulfil the privacy, robustness, and autonomy related promises that proprietary social media platforms have failed to deliver so far.
In order to test the first demo with two tools (TIC and CONF):

For more information about the project and the latest news, please visit https://articonf.eu

Project website:

 

ARTICONF Crowd Journalism Use Case

Estimated Test Duration:

20 min

Target beta testers profile:

Business users

Beta tester level:

Beginner

Campaign objectives

The main objective of the campaign is to test and evaluate a first version of the crowd streaming ecosystem developed in the context of the ARTICONF project.

The Crowd Journalism ecosystem is composed of three main components:
- A Mobile Application for live capture and streaming of news events, with which citizens and journalists can transmit a breaking news event in real time from a particular location.
- A web-based Classifier that aggregates the multiple live news video feeds from citizens and displays them in a four-player multiviewer, where they can be classified according to three criteria: impact, trustworthiness, and level of information.
- A web-based Marketplace in which the creators of the news videos can sell them to potential buyers (citizens, news companies). Transactions are made using virtual tokens that can be exchanged for products or services.

The Crowd Journalism platform sits on top of a blockchain-based infrastructure to ensure anonymity and secure, immutable transactions.

Two main components will be evaluated in this test:
- Mobile crowd streaming application
- Multiviewer/Classifier/Editor

Requirements for this campaign

The requirements for the testing are the following:

- Personal computer with Internet connection
- At least one smartphone with Internet connection and GPS (Wi-Fi, 4G)
- Web Browser (Chrome, Firefox)

It is not necessary to install any component of the ecosystem on your own device.

Beta test instructions and scenario

To test the ecosystem, please go to the feedback questionnaire, where the instructions are provided.

Campaign Mailing List

▲ back


CROSSMINER Softeam Use Case Evaluation

Internal evaluation of the CROSSMINER platform in the context of the Softeam use case

▼ campaigns

CROSSMINER enables the monitoring, in-depth analysis and evidence-based selection of open source components, and facilitates knowledge extraction from large open-source software repositories.

 

CROSSMINER Final Evaluation

Estimated Test Duration:

10 minutes

Target beta testers profile:

Business users

Beta tester level:

Beginner

Campaign objectives

Collect feedback from the Modeliosoft development team (Softeam) about the deployment and usage of the CROSSMINER platform.

Requirements for this campaign

Modeliosoft development team only

Beta test instructions and scenario

Answer the questionnaire based on the experiments conducted on the CROSSMINER platform over the last 3 months.

▲ back

Upcoming Campaigns

They Buy For You

They Buy For You uses open data to build a cross-E...

Save-a-Space

Save-a-Space pre-bookable parking app

IOF2020 User Acceptance Testing - UAT

IOF2020 is a European project that support the dev...

RobMoSys

RobMoSys is a model based development approach for...

SODALITE

SODALITE aims to provide an optimised, highly resi...

RADON

RADON aims to help the European software industry ...

ICARUS

ICARUS aims to deliver a novel framework and archi...

PDP4E

Methods and tools for GDPR compliance through Priv...

BRAIN-IoT

BRAIN-IoT develops solutions for the Next Generati...

INTUITE_AI

Our mission is to unleash the power of sensitive d...

IoF2020 - Business Model Toolbox

...

The Beta-Testing Campaign Platform for Research Projects.

What is ReachOut's main objective? ReachOut helps H2020 projects in the area of software technologies develop beta-testing campaigns for their software, building bridges between projects and their markets. It provides projects with end-to-end support to develop and launch beta-testing campaigns, enabling them to engage concretely with their potential users and develop their ecosystems.



What is Beta-Testing?

Beta testing is intended to collect feedback from customers on a pre-release product in order to improve its quality. It is the last stage before shipping a product. Not only does it help finalize a product, it is also a marketing tactic that helps develop a base of early adopters.


News and Events


Nov 22

Databench Toolbox Campaign is Open

Nov 16

Smooth GDPR Campaign is Open

Nov 13

SFScon 2020, November 13-14, Bolzano, Italy



Community

Be part of the growing ReachOut community. Subscribe here to receive new campaigns, best practices, and recommendations.


Contact Us

Do not hesitate to write to us directly for any other questions, proposals or partnership enquiries.



Partner Projects

Beneficiaries of H2020 cascade funding projects are welcome to join ReachOut. More.

  • EDI
  • NGI Ledger
  • NGI Pointer
  • NGI DAPSI

The ReachOut project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement number 825307.

The information in this document is provided "as is", and no guarantee or warranty is given that the information is fit for any particular purpose. The content of this document reflects only the author's view – the European Commission is not responsible for any use that may be made of the information it contains. Users use the information at their sole risk and liability.

This wiki is licensed under a Creative Commons 4.0 license