Avoid Data Loss Errors When Publishing a Microsoft Visual Studio Database Project



By: Nai Biao Zhou | Updated: 2018-05-11 | Related: More > Database Administration



Problem

With a Microsoft Visual Studio Database Project, we can use version control software to manage changes to databases. However, we may face these problems:

  1. The database project deployment failed, and the error message said “The schema update is terminating because data loss might occur”.
  2. A database object, for example a table, was removed in the project, but the object was still in the target database after a successful deployment.
Solution

When publishing a database project to the target database server using Microsoft Visual Studio, we can solve these problems through the “Advanced Publish Settings” window. When deploying the dacpac file with the command “SqlPackage.exe”, we can solve these two problems by adding specific parameters.

The solution was tested with Microsoft Visual Studio Community 2017 on Windows 10 Home 10.0 (X64). The DBMS is Microsoft SQL Server 2017 Enterprise Evaluation Edition (64-bit). The sample data was retrieved from the AdventureWorks sample databases.

Add NOT NULL Columns to a Table

We have a staging table “[dbo].[Stage_Special_Offer]” in the database project “DWH_ETL_STORE”. The following screenshot presents the table structure. The table already contains some data.
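As a rough sketch, the table definition in the project looks something like the following. Only the table name comes from this tip; the column names and data types are assumptions standing in for the structure shown in the screenshot.

-- Hypothetical sketch of the staging table in the DWH_ETL_STORE project.
-- The column names and data types are placeholders for the structure
-- shown in the screenshot; only the table name is taken from this tip.
CREATE TABLE [dbo].[Stage_Special_Offer]
(
    [Special_Offer_ID] INT             NOT NULL,
    [Offer_Type]       NVARCHAR (50)   NOT NULL,
    [Discount_Pct]     DECIMAL (10, 4) NOT NULL
);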

We added a new column “Offer_Description” with data type “NVARCHAR (50) NOT NULL”. Then we published the project by using the database project publishing wizard. We received the error message about possible data loss mentioned in the problem statement.

We found this comment in the “DWH_ETL_Store.publish.sql” file:

/* The column [dbo].[Stage_Special_Offer].[Offer_Description] on table [dbo].[Stage_Special_Offer] must be added, but the column has no default value and does not allow NULL values. If the table contains data, the ALTER script will not work. To avoid this issue, you must either: add a default value to the column, mark it as allowing NULL values, or enable the generation of smart-defaults as a deployment option. */

Some developers may add a default value to the column. I do not think this is a preferable solution for a table in a data warehouse. A NOT NULL column is usually needed on the basis of business requirements, and adding a default value seems to bypass the requirement unless a business requirement asks for it. In addition, ETL developers could not find a data integrity error immediately if a bug in the ETL process put a NULL value into the NOT NULL column.
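For comparison, the default-value approach described above would end up generating an ALTER statement along these lines. The constraint name and the default text are assumptions for illustration only.

-- The alternative this tip advises against for data warehouse tables:
-- adding the NOT NULL column together with a permanent default constraint.
-- The constraint name and the default value are illustrative assumptions.
ALTER TABLE [dbo].[Stage_Special_Offer]
    ADD [Offer_Description] NVARCHAR (50)
        CONSTRAINT [DF_Stage_Special_Offer_Offer_Description] DEFAULT (N'Unknown') NOT NULL;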

Publish Table Changes by Enabling Smart Defaults with GUI

A preferable solution for a data warehouse table is to enable the generation of smart defaults as a deployment option and let the ETL process ensure data integrity and validation.

Here is the process to publish a database project with the generation of smart defaults as a deployment option enabled.

Step 1

Right-click the project name in the “Solution Explorer” window and select “Publish” from the pop-up menu. Configure the “Target Database Settings”, then click on the Advanced button.

Step 2

On the Advanced Publish Settings window, check the “Generate smart defaults, when applicable” checkbox. Then click the “OK” button. This gets us back to the above screen where we can use the Save Profile As button if we want to save these settings for next time.

Step 3

Then click the “Publish” button.

The following screenshot shows the data in the table after the database project was published successfully. The new column “Offer_Description” was added with empty values.
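As a quick check after the deployment, a query such as the one below should show the smart default, an empty string for an NVARCHAR column, in every pre-existing row. This is only a verification sketch, not part of the generated publish script.

-- Verify that existing rows received the smart default (an empty string).
SELECT TOP (10)
    [Offer_Description]
FROM [dbo].[Stage_Special_Offer];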

Publish Table Changes by Enabling Smart Defaults using Command Line

If we want to do this from the command line, we can use this command line syntax to deploy the project with the command “SqlPackage.exe”.

"C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:"DWH_ETL_STORE.dacpac" /TargetConnectionString:"Data Source=IDEA-PC;Integrated Security=True;Initial Catalog=DWH_ETL_STORE;" /p:GenerateSmartDefaults=True

This shows the project was deployed successfully with these confirmation messages.

Alter Not NULL Columns in a Table

Sometimes we might receive the same “data loss” error when we change the name of a NOT NULL column. This does not always happen; it depends on how we change the column name. If we change the name in the “T-SQL” panel, the error occurs.

To publish the project successfully, we need to change the NOT NULL column name in the “Design” panel. Note that other versions of Microsoft Visual Studio may provide a “Rename” menu item in the “Refactor” context menu.

When we changed the column name in the “Design” panel, a “refactorlog” file, “DWH_ETL_STORE.refactorlog”, was generated containing XML elements that record the rename operation.

With this “DWH_ETL_STORE.refactorlog”, the publish scripts used “sp_rename” to rename the column, thus no data loss occurred.
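For illustration, the rename issued by the generated publish script looks roughly like the statement below. The old column name “Offer_Desc” is hypothetical; the refactorlog file records the actual original and new names.

-- Sketch of the rename the generated publish script performs.
-- The old column name [Offer_Desc] is a hypothetical example.
EXECUTE sp_rename
    @objname = N'[dbo].[Stage_Special_Offer].[Offer_Desc]',
    @newname = N'Offer_Description',
    @objtype = N'COLUMN';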

A system table “[dbo].[__RefactorLog]” was created in the target database to trace the database refactoring.
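To see what has been recorded, we can query this table. In a typical deployment it contains a single “OperationKey” column that stores the key of each refactoring operation already applied, though this is worth confirming against your own target database.

-- Inspect the refactoring operations that have already been applied.
SELECT [OperationKey]
FROM [dbo].[__RefactorLog];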

Drop Objects In Target But Not In Source

We might find that tables deleted from the database project are still in the target database. The database deployment could add new tables, but it did not remove unused tables. We can solve this through the “Advanced Publish Settings” window.

Check the checkbox “Drop objects in target but not in source” in the “Advanced Publish Settings” window as shown in the screenshot. This will remove objects that were deleted from the database project.
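With this option enabled, the generated publish script will include DROP statements for objects that exist only in the target database, along the lines of the statement below. The table name is hypothetical.

-- Example of the kind of statement the publish script generates when
-- "Drop objects in target but not in source" is enabled.
-- [Stage_Old_Special_Offer] is a hypothetical table that no longer
-- exists in the database project.
DROP TABLE [dbo].[Stage_Old_Special_Offer];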


Here is the command line syntax to deploy the project with the command “SqlPackage.exe”.

"C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:"DWH_ETL_STORE.dacpac" /TargetConnectionString:"Data Source=IDEA-PC;Integrated Security=True;Initial Catalog=DWH_ETL_STORE;" /p:GenerateSmartDefaults=True /p:DropObjectsNotInSource=True /p:UnmodifiableObjectWarnings=False

References

[1] AdventureWorks sample databases. Retrieved March 2, 2018, from https://msdn.microsoft.com/en-us/library/ms124825(v=sql.100).aspx/.

[2] Walkthrough: Apply Database Refactoring Techniques. Retrieved April 7, 2018, from https://msdn.microsoft.com/en-us/library/dd193272(v=vs.100).aspx/.

Next Steps
  • In practice, we usually use scripts to deploy a database project to target servers automatically. This document about SqlPackage.exe is very helpful to construct the automation scripts. Microsoft Visual Studio also provides a function to compare the schema between the database project and the target database. We can use this function to ensure all of the NOT NULL column name changes were captured in the “refactorlog” file. This file should be stored under the version control system.




About the author
Nai Biao Zhou is a Senior Software Developer with 20+ years of experience in software development, specializing in Data Warehousing, Business Intelligence, Data Mining and solution architecture design.
