Note

Big Data Sets are available since BellaDati 2.9.

Big Data Sets are a special type of data set which can be used to store very large amounts of data and build pre-calculated cubes. The main differences between standard data sets and big data sets are:

  • Reports cannot be built directly on big data sets. Cubes have to be created first.
  • In big data sets, it is not possible to browse all data; only a random data sample is available. Filters and the edit and delete functions are not available in the data sample.
  • Big data sets cannot be joined.

Tip

The main advantage of Big Data Sets is the ability to create pre-calculated cubes, which significantly speeds up report loading times.

Creating a Big Data Set

Note

Please note that the Big Data Set functionality needs to be enabled both in the license and in the domain.

A Big Data Set can be created by clicking on the link Create big data set in the Action menu on the Data Sets page, filling in the name of the big data set, and clicking on Create.

Big Data Set Summary Page

The landing page (summary page) is very similar to the standard data set summary page. It consists of a left navigation menu and a main area with basic information about the data set:

  • description,
  • date of last change,
  • records count,
  • cubes overview,
  • import history.

Importing Data

Data can be imported into a big data set the same way as into a standard data set. Users can either import data from a file or from a data source. However, a big data set does not use standard indicators and attributes; instead, each column is defined as an object. These objects can have various data types:

  • text,
  • date,
  • time,
  • datetime,
  • GEO point,
  • GEO JSON,
  • long text,
  • boolean,
  • numeric.
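
For illustration, here is a minimal sketch of one record whose columns could map to these object types. This is a plain Groovy map with hypothetical column names and values, not BellaDati API:

Code Block
// One hypothetical record; the comment on each line names a matching object type
def record = [
    storeName  : 'Store 42',                                      // text
    openedOn   : '2019-03-01',                                    // date
    opensAt    : '08:00:00',                                      // time
    lastUpdate : '2019-03-01 08:00:00',                           // datetime
    location   : [lat: 50.08, lon: 14.43],                        // GEO point
    region     : '{"type":"Point","coordinates":[14.43,50.08]}',  // GEO JSON
    notes      : 'A longer free-text description of the store',   // long text
    active     : true,                                            // boolean
    revenue    : 1234.56                                          // numeric
]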

After import, users can open the data sample page to see a randomly selected part of the data.

Managing Objects

Objects (columns) can be created automatically during the import, or they can be defined on the data model page. When adding a new object, users can specify its name, data type, and indexation, and whether it can contain empty values or not. Please note that GEO point, GEO JSON, long text, boolean and numeric objects cannot be indexed.

Objects can also be edited and deleted by clicking on their row.

Cubes

A cube is a data table which contains aggregated data from the big data set. Users can define the aggregation and also limit the data by applying filters. Data from the cube can then be imported into a data set. Each big data set can have more than one cube, and each cube can have different settings.
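
As an illustration of what a cube computes, the following sketch (plain Groovy, not BellaDati API) aggregates rows by one attribute element (Country) and sums one data element (Amount):

Code Block
// Three source rows in the big data set
def rows = [
    [country: 'US', amount: 10],
    [country: 'US', amount: 20],
    [country: 'DE', amount: 5]
]
// Aggregation by Country: one cube row per country, Amount summed
def cube = rows.groupBy { it.country }
               .collectEntries { country, group -> [country, group.sum { it.amount }] }
assert cube == [US: 30, DE: 5]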

Creating a Cube

To create a cube, users need to follow these steps:

  1. Click on Create cube.
  2. Fill in the name and, optionally, the description.
  3. Select which columns to use (attribute elements and data elements). Attribute elements define the aggregation of the cube. For example, if the user selects the column Country, the data will be aggregated for each country (one row = one country). Users can also create formula indicators. A real-time preview of the cube is shown on the right side of the screen. Please note that the preview is built on the data sample only, which means that it can be empty even though data will be imported to the destination data set after the execution. It is possible to change the order of attributes and indicators by using the arrows located next to their names.
  4. Optionally, users can also apply filters to work with only a part of the data.
    1. In the filters, users can reference the first and last values from a different data set by using the following functions:

      Code Block
      ${firstValue(DATA_SET_CODE,L_ATTRIBUTE)}
      ${lastValue(DATA_SET_CODE,L_ATTRIBUTE)}
      ${firstValue(DATA_SET_CODE,M_INDICATOR)}
      ${lastValue(DATA_SET_CODE,M_INDICATOR)}

      The function has to be added as a custom value to the filter.
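
      For example, assuming a hypothetical destination data set with code SALES and a date attribute L_DATE, adding the following custom value to a "greater than" filter would keep only rows newer than the last date already present there:

      Code Block
      ${lastValue(SALES,L_DATE)}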

    2. It is also possible to add a filter formula. This allows users to create more complex filter algorithms. It is also possible to use the function getLastSuccessfulCubeExecution() to get the date and time of the last successful cube execution.

      Code Block
      // Create an empty filter and add one condition: keep only rows whose
      // timestamp indicator is greater than ('GT') the time of the last
      // successful cube execution.
      def f = createFilter()
      andFilter(f, 'M_TIMESTAMP_INDICATOR', 'GT', timestamp(datetime(getLastSuccessfulCubeExecution().toString('yyyy-MM-dd HH:mm:ss'))))
      return f
  5. Select the destination data set and mapping. Users have to select the destination data set by using the search field. After execution, data will be imported from the cube into this data set. After choosing the data set, users have to specify the mapping: each column of the cube has to be assigned to an attribute or indicator of the destination data set. Attribute elements can be mapped to attribute columns in the destination data set. Data elements can be mapped to indicator columns and also to attribute columns.
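
    For example, a cube with the attribute element Country and the data element SUM(Amount) could be mapped to a hypothetical destination data set containing the attribute L_COUNTRY and the indicator M_AMOUNT. A plain Groovy sketch of the idea, not BellaDati API:

      Code Block
      // Hypothetical mapping: cube column -> destination data set column
      def mapping = [
          'Country'    : 'L_COUNTRY',   // attribute element -> attribute column
          'SUM(Amount)': 'M_AMOUNT'     // data element -> indicator column
      ]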

  6. Set up the execution schedule. The execution can be run manually, on data change, or by schedule. When scheduling the execution, users can specify the following parameters (see the batching sketch after this list):
    1. Batch size (default 1000) - the number of rows which will be processed in one batch. In special cases, it might be beneficial to increase or decrease the value. However, in most cases, we strongly suggest leaving it on the default.

    2. Workers count (default 8) - the number of workers which should be used for parallel execution.

    3. Execution timeout [s] - sets the maximum duration of the execution.
    4. When - time of the first execution.

    5. Schedule - how often the execution should be run.

    6. Import Method - what should happen with data in the destination data set. See Data overwriting policy for more information.
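
As an illustration of how the batch size relates to the number of batches (plain Groovy, not BellaDati internals; batches are then processed in parallel by up to the configured workers count):

Code Block
// 10,000 rows with a batch size of 1,000 produce 10 batches
def rows = (1..10000).toList()
def batches = rows.collate(1000)
assert batches.size() == 10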

Cube Summary

On the Cubes page, users can see a table with all cubes associated with the big data set. For each cube, information about the schedule and the last event is available. Users can edit a cube by clicking anywhere on its row or on its name. Several actions are also available for each cube:

  • History - see a list of previous executions and their results.
  • Run - manually run the execution.
  • Schedule - reschedule the execution. Previous settings will be overwritten.
  • Delete - delete the cube.

Cube Execution

As mentioned above, execution can be run manually, on data change or by schedule.

  • Manual execution - by clicking on Run in the Action column, users can start the execution manually. They can also select the import method, which can differ from the one used for scheduled execution.
  • On data change - every time there is a change in the big data set, the execution will be started.
  • Scheduled execution - the execution will run periodically at the specified interval.

 

Users can also cancel the next scheduled execution by clicking on the date in the Schedule column and confirming the cancellation. Please note that this only cancels the next execution; it does not delete the schedule. After running the execution manually, the schedule will be restored. To remove the scheduled execution completely, users have to edit the cube and delete the execution schedule.

Big Data Set Backup

When using the XML backup of a big data set, the destination data set and the mapping in the cube are not stored. After restoring, they have to be set up again.