Welcome to the first Release Notes of 2018! We are happy to see you back =)
Automated Data Distribution: Deleting Data by Delete Side Table
As we announced earlier, you can now delete data from a dataset while loading new data using Automated Data Distribution (ADD).
In this release, we have enhanced this functionality with the option of deleting data by the delete side table.
What deletion options are now available in ADD?
- Deleting data by the data load table
Add the 'x__deleted' column to the data load table. The 'x__deleted' column indicates whether a specific row should be deleted from the dataset when running a data load. You can choose to delete data by Connection Point, Fact Table Grain, or attribute values (attribute values in the data load table must match those in the workspaces from which you want to delete data).
- Deleting data by the delete side table
Create a delete side table that describes the columns by which you want to delete data.
You do not have to specify all columns according to the definition of the dataset within your LDM. The delete side table lets you flexibly define criteria by which you can delete data (by any combination of attributes).
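As a minimal sketch of what a delete side table can look like, the snippet below builds one as CSV. The column names ('region', 'order_status') are illustrative only, not a GoodData-mandated schema; the point is that any subset of the dataset's attributes can serve as the deletion criteria.

```python
import csv
import io

# A delete side table: each row defines a combination of attribute values,
# and dataset rows matching that combination are deleted on data load.
# Column names here are hypothetical examples.
side_table = io.StringIO()
writer = csv.writer(side_table)
writer.writerow(["region", "order_status"])   # any subset of dataset attributes
writer.writerow(["EMEA", "cancelled"])        # delete EMEA rows with status 'cancelled'
writer.writerow(["APAC", "test"])             # delete APAC rows with status 'test'

side_table.seek(0)
rows = list(csv.reader(side_table))
print(rows)
```

Note that, unlike the 'x__deleted' approach, the side table never has to mirror the full dataset definition from your LDM.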
TIP: If you do not use ADD yet, we invite you to explore its features to see how you can benefit from it. For assistance with migrating to ADD, contact your GoodData Account Manager.
Learn more:
Automated Data Distribution
Delete Data from Datasets in Automated Data Distribution
Process Schedules for Manual Execution Only
You can now schedule a data loading process that can be executed only manually: via the Data Integration Console or the API.
Such processes do not run at any specified time and do not allow you to set any time interval or condition for execution. You run them manually for specific use cases.
When can such schedules be useful?
- Your data is delivered at irregular intervals, therefore you need to run a data load process only when the next data batch is available. For example, you have a schedule that should run after an external process has completed successfully, and you cannot predict how long this external process will take to complete.
- You want to have a schedule that you manually run only in specific situations, for example, running full data load for all your projects; reloading project data in case of inconsistency; restoring particular data from backup; performing one-time corrective actions, and so on.
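Triggering such a schedule from the API can be sketched as below. The endpoint path and the empty 'execution' body are assumptions based on the schedule execution API referenced in these notes; the host, project ID, and schedule ID are placeholders.

```python
import json
import urllib.request

# Hypothetical sketch: manually triggering a schedule over the REST API.
# Authentication is omitted; IDs and the endpoint path are placeholders.
host = "https://secure.gooddata.com"
project_id = "PROJECT_ID"
schedule_id = "SCHEDULE_ID"

body = {"execution": {}}  # a plain manual run needs no extra params
request = urllib.request.Request(
    url=f"{host}/gdc/projects/{project_id}/schedules/{schedule_id}/executions",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    method="POST",
)
print(request.get_method(), request.get_full_url())
```

The same request can of course be issued from any HTTP client; the Data Integration Console remains the no-code alternative.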
Learn more:
Scheduling a Process for Manual Execution Only
Retrying Process Schedules when Executed via API
When executing a schedule for a data load process via the API, you can now configure it to retry on failure. When a retry delay is specified, the platform automatically re-runs a failed process after the delay has elapsed.
Steps:
- Verify that the schedule you want to execute has the retry delay set (the 'reschedule' property is set). For more information, see schedule properties and Configuring Automatic Retry of Failed Processes.
- Add the following section to the API request body:
{
  "execution": {
    "params": {
      "retry": "true"
    }
  }
}
NOTE: 'retry' is applied only when the schedule has the retry delay set (that is, the 'reschedule' property is set). If the schedule does not have it set, 'retry' is ignored, and the retry delay is not applied to the schedule.
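The request body above can be built and serialized as follows; this is only a sketch of payload construction, with the endpoint and authentication left out.

```python
import json

# Request body for executing a schedule with automatic retry enabled.
# 'retry' only takes effect if the schedule itself has a retry delay
# (the 'reschedule' property) configured.
body = {
    "execution": {
        "params": {
            "retry": "true"   # a string value, matching the documented body
        }
    }
}
payload = json.dumps(body)
print(payload)
```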
Learn more:
API for executing a process schedule
Configuring Automatic Retry of Failed Processes
Adjusting Column Width in Table Reports Enhanced
When you adjust the column width in a table report while holding Shift, the adjusted width becomes the default for all columns, including those that may be hidden (due to applied filters or permissions) and any columns added to the report in the future.
The adjusted width also becomes the default for all dashboard users.
Learn more:
Adjusting Column Width
Enhanced API Method for Applying Filters to Dashboard
When applying filters to an embedded dashboard or a dashboard with embedded content using the API method 'Apply attribute filters to a dashboard', you can now reset a previously applied filter. To do so, set the filter's value to GDC_SELECT_ALL. Setting the filter to GDC_SELECT_ALL is similar to clicking Select all for a filter in the GoodData Portal UI.
{
  gdc: {
    setFilterContext: [
      {
        label: "attribute.filter",
        value: "GDC_SELECT_ALL"
      }
    ]
  }
}
If the dashboard includes a corresponding filter in the GoodData Portal UI (see Filter for Attributes), this filter will get all its values selected (similar to clicking Select All in the filter).
Learn more:
Embedded Dashboard and Report API - Events and Methods (look for 'Apply attribute filters to a dashboard')
Upcoming Upgrade of Project Metadata Objects Rescheduled
As we announced earlier, we are planning to upgrade project metadata objects.
The upgrade will be available in March 2018. We will inform you about the upgrade schedule in future Release Notes.
REMINDER: Upcoming API Updates
The following API update will be implemented:
There will be a backward incompatible change to the /gdc/md/<project-id>/userfilters API, which serves data permission assignment to users.
This command can only be used by the owner of the domain that the <login> belongs to. After this change is implemented, you may receive "Unknown user or user URI is wrong." when you use <login>. If this happens, use <user-id> instead.
A separate notice will be posted in future Release Notes when the change occurs.