Channel: Forum SQL Server Database Engine

Bulk Insert error for exponential values during import in sqlserver


We got the following error during a BULK INSERT in SQL Server:

Microsoft SQL Server 2014 (SP1-GDR) (KB4019091) - 12.0.4237.0 (X64) 
Web Edition (64-bit) on Windows NT 6.3 <X64> (Build 9600: ) (Hypervisor)

BULK INSERT ABC

FROM 'D:\Dump001.csv'

WITH (FIELDTERMINATOR = '|',ROWTERMINATOR = '\n', FIRSTROW=2)

Msg 4864, Level 16, State 1, Line 3

Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 552612, column 148 (column_amt).

Msg 4864, Level 16, State 1, Line 3

Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 635686, column 148 (column_amt).

Msg 4864, Level 16, State 1, Line 3

Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 657871, column 148 (column_amt).

Msg 4864, Level 16, State 1, Line 3

Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 675625, column 148 (column_amt).

Msg 4864, Level 16, State 1, Line 3

Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 675628, column 148 (column_amt).

Msg 4864, Level 16, State 1, Line 3

Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 699484, column 148 (column_amt).

Msg 4864, Level 16, State 1, Line 3

Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 898948, column 148 (column_amt).

Msg 4864, Level 16, State 1, Line 3

Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 997300, column 148 (column_amt).

Msg 4864, Level 16, State 1, Line 3

Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1214311, column 148 (column_amt).

Msg 4864, Level 16, State 1, Line 3

Bulk load data conversion error (type mismatch or invalid character for the specified codepage) for row 1494400, column 148 (column_amt).

We found exponential (scientific-notation) values in that column, such as 1.0001666E7.

Please help us resolve this issue or suggest a workaround.
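
Since DECIMAL/NUMERIC columns cannot parse scientific notation directly but FLOAT can, one common workaround is to bulk load the file into a staging table that declares column_amt as a character type and then convert through FLOAT. A minimal sketch, assuming a hypothetical staging table reduced to the problem column (the real one would mirror all of ABC's columns):

CREATE TABLE ABC_staging (column_amt VARCHAR(50) /* ...remaining columns as character types... */);

BULK INSERT ABC_staging
FROM 'D:\Dump001.csv'
WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- CAST('1.0001666E7' AS FLOAT) succeeds where a direct CAST to DECIMAL fails
INSERT INTO ABC (column_amt)
SELECT CAST(CAST(column_amt AS FLOAT) AS DECIMAL(18, 4))
FROM ABC_staging;

Note the round trip through FLOAT is subject to ordinary floating-point rounding, so choose the target precision accordingly.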


rajesh


Modifying sp_help_revlogin to replicate Logins


Hi, 

We have modified sp_help_revlogin to exclude machine-local Windows logins. Such logins are scripted out by the original procedure, but they cannot be created on another server.

Machine Name: DBServer1

Login : DBServer1\login1

We cannot create DBServer1\login1 on DBServer2.

Also, if the machine name has lowercase letters in it, the name is converted to ALL capital letters when the login is added.

If you create login2 on DBserver2, you will find that DBSERVER2\login2, not DBserver2\login2, has been created. We handled that case as well. I will paste the modified code below in case it helps someone.

In the latest versions of SSMS, a completion time is inserted into the query output, and I couldn't find a way to turn it off using T-SQL or PowerShell. As a result, we are unable to automate login creation on the other machine by consuming the output of the modified sp_help_revlogin.

-- Login: test4
IF NOT EXISTS (SELECT name from master.sys.syslogins WHERE name = 'test4') CREATE LOGIN [test4] WITH PASSWORD = 0x0200B3F29E427D5BAD24E3E59296991F1AC949A56FA01A3D5838F4CC96F53076EF0653AC967C7D9500B7CCAFEF2E5D42D2BCC3205123336A2ECB1107B063C3ABE11F2DEF24E5 HASHED, SID = 0x61335A88737D6D429C817738CF7534E7, DEFAULT_DATABASE = [master], CHECK_POLICY = OFF, CHECK_EXPIRATION = OFF
 
Completion time: 2020-06-23T22:36:14.0419382-07:00
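
One workaround worth trying (a sketch, assuming Windows authentication; adjust the server and file names): generate the script with sqlcmd instead of SSMS, since the "Completion time" line is injected by the SSMS client rather than by the server:

sqlcmd -S DBServer1 -E -Q "EXEC sp_help_revlogin" -o logins.sql

The resulting logins.sql should then replay on DBServer2 without any post-processing.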

Appreciate your insightful response - Thank you!!

Modified sp_help_revlogin

IF OBJECT_ID ('sp_help_revlogin') IS NOT NULL
  DROP PROCEDURE sp_help_revlogin
GO
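-- NOTE: this procedure depends on the helper sp_hexadecimal from the original
-- Microsoft script (KB918992); create that first on any server where this runs.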
CREATE PROCEDURE sp_help_revlogin @login_name sysname = NULL AS
DECLARE @name sysname
DECLARE @type varchar (1)
DECLARE @hasaccess int
DECLARE @denylogin int
DECLARE @is_disabled int
DECLARE @PWD_varbinary  varbinary (256)
DECLARE @PWD_string  varchar (514)
DECLARE @SID_varbinary varbinary (85)
DECLARE @SID_string varchar (514)
DECLARE @tmpstr  varchar (1024)
DECLARE @is_policy_checked varchar (3)
DECLARE @is_expiration_checked varchar (3)

DECLARE @defaultdb sysname

IF (@login_name IS NULL)
  DECLARE login_curs CURSOR FOR

      SELECT p.sid, p.name, p.type, p.is_disabled, p.default_database_name, l.hasaccess, l.denylogin
      FROM sys.server_principals p
      LEFT JOIN sys.syslogins l ON l.name = p.name
      WHERE p.type IN ('S', 'G', 'U')
        AND p.name <> 'sa'
        AND p.name <> '##MS_PolicyTsqlExecutionLogin##'
        AND p.name <> '##MS_PolicyEventProcessingLogin##'
        -- exclude machine-local Windows logins (the modification described above)
        AND CHARINDEX(UPPER(CONVERT(VARCHAR(128), SERVERPROPERTY('MachineName'))) + '\', p.name) = 0
        AND p.name NOT LIKE 'NT SERVICE\%'
        AND p.name NOT LIKE 'NT AUTHORITY\%'
ELSE
  DECLARE login_curs CURSOR FOR


      SELECT p.sid, p.name, p.type, p.is_disabled, p.default_database_name, l.hasaccess, l.denylogin
      FROM sys.server_principals p
      LEFT JOIN sys.syslogins l ON l.name = p.name
      WHERE p.type IN ('S', 'G', 'U') AND p.name = @login_name
OPEN login_curs

FETCH NEXT FROM login_curs INTO @SID_varbinary, @name, @type, @is_disabled, @defaultdb, @hasaccess, @denylogin
IF (@@fetch_status = -1)
BEGIN
  PRINT 'No login(s) found.'
  CLOSE login_curs
  DEALLOCATE login_curs
  RETURN -1
END
SET @tmpstr = '/* Login creation script '
PRINT @tmpstr
SET @tmpstr = '** Generated ' + CONVERT (varchar, GETDATE()) + ' on ' + @@SERVERNAME + ' */'
PRINT @tmpstr
PRINT ''
WHILE (@@fetch_status <> -1)
BEGIN
  IF (@@fetch_status <> -2)
  BEGIN
    PRINT ''
    SET @tmpstr = '-- Login: ' + @name
    PRINT @tmpstr
    IF (@type IN ( 'G', 'U'))
    BEGIN -- NT authenticated account/group

      SET @tmpstr = 'IF NOT EXISTS (SELECT name from master.sys.syslogins WHERE name = ''' + @name + ''')' + ' CREATE LOGIN ' + QUOTENAME( @name ) + ' FROM WINDOWS WITH DEFAULT_DATABASE = [' + @defaultdb + ']'
    END
    ELSE BEGIN -- SQL Server authentication
        -- obtain password and sid
            SET @PWD_varbinary = CAST( LOGINPROPERTY( @name, 'PasswordHash' ) AS varbinary (256) )
        EXEC sp_hexadecimal @PWD_varbinary, @PWD_string OUT
        EXEC sp_hexadecimal @SID_varbinary,@SID_string OUT

        -- obtain password policy state
        SELECT @is_policy_checked = CASE is_policy_checked WHEN 1 THEN 'ON' WHEN 0 THEN 'OFF' ELSE NULL END FROM sys.sql_logins WHERE name = @name
        SELECT @is_expiration_checked = CASE is_expiration_checked WHEN 1 THEN 'ON' WHEN 0 THEN 'OFF' ELSE NULL END FROM sys.sql_logins WHERE name = @name

            SET @tmpstr = 'IF NOT EXISTS (SELECT name from master.sys.syslogins WHERE name = ''' + @name + ''')' + ' CREATE LOGIN ' + QUOTENAME( @name ) + ' WITH PASSWORD = ' + @PWD_string + ' HASHED, SID = ' + @SID_string + ', DEFAULT_DATABASE = [' + @defaultdb + ']'

        IF ( @is_policy_checked IS NOT NULL )
        BEGIN
          SET @tmpstr = @tmpstr + ', CHECK_POLICY = ' + @is_policy_checked
        END
        IF ( @is_expiration_checked IS NOT NULL )
        BEGIN
          SET @tmpstr = @tmpstr + ', CHECK_EXPIRATION = ' + @is_expiration_checked
        END
    END
    IF (@denylogin = 1)
    BEGIN -- login is denied access
      SET @tmpstr = @tmpstr + '; DENY CONNECT SQL TO ' + QUOTENAME( @name )
    END
    ELSE IF (@hasaccess = 0)
    BEGIN -- login exists but does not have access
      SET @tmpstr = @tmpstr + '; REVOKE CONNECT SQL TO ' + QUOTENAME( @name )
    END
    IF (@is_disabled = 1)
    BEGIN -- login is disabled
      SET @tmpstr = @tmpstr + '; ALTER LOGIN ' + QUOTENAME( @name ) + ' DISABLE'
    END
    PRINT @tmpstr
  END

  FETCH NEXT FROM login_curs INTO @SID_varbinary, @name, @type, @is_disabled, @defaultdb, @hasaccess, @denylogin
   END
CLOSE login_curs
DEALLOCATE login_curs
RETURN 0
GO


nonpreemptive mode longer than 1000ms


On SQL Server 2014 I get occasional "long sync IO: scheduler 10 had 1 Sync IOs in nonpreemptive mode longer than 1000ms" messages in the SQL Server error log.

I am aware of 

https://bobsql.com/how-it-works-sync-ios-in-nonpreemptive-mode-longer-than-1000-ms/

https://blogs.msdn.microsoft.com/psssql/2008/03/03/how-it-works-debugging-sql-server-stalled-or-stuck-io-problems-root-cause

https://blogs.msdn.microsoft.com/psssql/2010/03/24/how-it-works-bob-dorrs-sql-server-io-presentation/

In this post I am interested in how to capture the statement/user/etc. that generates this IO wait, using Extended Events (EE).

If I enable the error_reported event, would that capture it?

Or should I filter on a specific wait type instead, or pattern-match on the message text?

Please let me know your ideas for an EE troubleshooting session.
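
A sketch of such a session, assuming this message surfaces through error_reported at all (some log-only messages bypass that event, so verify on a test instance first); the session name and file target are arbitrary:

CREATE EVENT SESSION [LongSyncIO] ON SERVER
ADD EVENT sqlserver.error_reported
(
    ACTION (sqlserver.session_id, sqlserver.username, sqlserver.client_hostname, sqlserver.sql_text)
    -- pattern-match on the message text, per the idea above
    WHERE sqlserver.like_i_sql_unicode_string([message], N'%nonpreemptive mode longer than%')
)
ADD TARGET package0.event_file (SET filename = N'LongSyncIO')
GO
ALTER EVENT SESSION [LongSyncIO] ON SERVER STATE = START
GO

If nothing gets captured, an alternative is a wait-oriented session on sqlos.wait_info filtered to the relevant wait types with a duration predicate, since the stalled I/O itself may never raise error_reported.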

thank you,



      SQL Server 2019 vs SQL Server 2016 Performance


      I am using HammerDB to run a stress test on SQL Server 2019 and 2016. However, 2019 out of the box shows around 15% degradation based on TPM.

      Here are the HammerDB settings:

      1 virtual user, 1 data warehouse, test run for 2 minutes.

      TPM 2019 : Avg 70000

      TPM 2016: Avg 80000

      Both are VMs on the same drive on my computer: WIN19-SQL19 vs WIN16-SQL16 (Windows Server 2019 with SQL Server 2019 vs Windows Server 2016 with SQL Server 2016).

      I have tried disabling all the new database-scoped features in SQL Server 2019, but it didn't help. Is there any tweak we have to make, or has anyone tried this kind of test?
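
      One more experiment worth running (a sketch; [tpcc] is HammerDB's default schema database name for SQL Server, adjust if yours differs): put the 2019 database at the 2016 compatibility level so both instances use the same optimizer model, which separates engine-version overhead from compat-150 behavior changes:

      ALTER DATABASE [tpcc] SET COMPATIBILITY_LEVEL = 130;  -- SQL Server 2016 level

      -- individual 2019-era features can also be toggled one at a time, e.g.:
      USE [tpcc];
      ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = OFF;

      If the TPM numbers converge at level 130, the gap comes from compatibility-level behavior rather than the engine itself.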

      SQL Agent fails to run via schedule but succeeds if run manually - why??


      Hello,

      I have a SQL Agent job that runs a simple stored procedure to execute sp_send_dbmail. I have configured the schedule type as "Start automatically when SQL Server Agent starts". Essentially, whenever SQL Server is rebooted or the SQL Agent is restarted, I want to know about it via an email.

      This job always fails with the message "Unable to connect to SQL Server '(local)'. The step failed." when it runs on that schedule. However, if I manually trigger the job, it always succeeds. Also, I logged on to the server using the same credential that runs the job and it worked fine as well. It only fails when it runs unattended.

      Can someone tell me why?
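
      A likely cause, hedged (the job history timestamps would confirm it): at "Agent start" time the Database Engine may not yet be accepting connections, so the step's connection attempt fails; a manual run happens long after startup, when everything is online. Since the failure occurs at connection time, a WAITFOR inside the step won't help; instead, set retry attempts on the step, either in the step properties or with something like this (the job name is hypothetical):

      EXEC msdb.dbo.sp_update_jobstep
           @job_name       = N'NotifyOnRestart',  -- hypothetical job name
           @step_id        = 1,
           @retry_attempts = 5,   -- retry the step up to 5 times
           @retry_interval = 1;   -- minutes between retries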

      Thanks


      On-Premises SQL Server Database Backup to Azure Storage Account as Blob Type = Block Blob and NOT Page Blob?


      Hi All,
      Recently I was working on Data Warehouse disaster recovery. Our SQL Server is on-premises, on a VM. I need to back up 15-16 databases and store the backups outside the VM. These backups total around 1 TB per day, and I was planning to keep a minimum of 5 days, so I was looking for roughly 6-7 TB of space outside the VM. After checking with the infrastructure team, I found that with the current setup they can't arrange 6-7 TB of space outside the VM.

      I started exploring Azure Blob Storage to store our backup on Azure Cloud and found this thread: https://www.mssqltips.com/sqlservertip/4900/perform-onpremises-sql-server-database-backups-using-maintenance-plans-to-azure-blob-storage/

      I followed the steps from the above thread and was able to take the backup and store it in the Azure Storage account, but as a Page Blob rather than a Block Blob. (Conclusion: if you use SQL Server Maintenance Plans to back up to URL, you are forced to use the Azure Storage account key to create the credential under Security; connecting to the storage account that way stores your backup file as a Page Blob, and there is no way to change this to Block Blob: https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url?view=sql-server-2017. To store your backup as a Block Blob you must use a SAS (Shared Access Signature), but then you can't use SQL Server Maintenance Plans.)

      The next step was to clean up backup files older than 5 days. First I thought of using Azure Blob "Lifecycle Management", but then I realized it works only with Block Blobs, not Page Blobs. So I used a PowerShell script to clean up the older .bak Page Blob files: https://social.msdn.microsoft.com/Forums/en-US/df73b5a3-d27b-45ee-b440-c1bed99f9b4e/how-to-automate-bak-delete-from-azure-blob?forum=sqlnetfx


      After doing all of the above, everything ran perfectly, and after 7 days I had 5 days of active backups and 2 days of soft-deleted backups. Then I calculated the monthly cost of storing 7-8 TB as Page Blobs, and it surprised me: it was going to cost 800-900 AUD every month, which is very high compared to Block Blob: https://docs.microsoft.com/en-us/answers/questions/39483/calculating-price-for-page-blob-in-azure-storage-a.html

      Next I looked for options to store my backup files as Block Blobs instead of Page Blobs. After spending 1-2 days I found that it is possible to take the backup using a Shared Access Signature, but that does not work with SQL Server Maintenance Plans. This link helped me write the SQL code to take a backup using SAS and store it as a Block Blob: https://blog.sqlauthority.com/2018/07/17/sql-server-backup-to-url-script-to-generate-credential-and-backup-using-shared-access-signature-sas/
      Note: I generated an ad-hoc SAS with the following settings. Initially I did not check "Service" under "Allowed resource types" and got the error "Azure Storage Explorer - Inadequate resource type access".


      So first I created a credential using SAS under Security in SQL Server:
      https://stuart-moore.com/creating-azure-blob-storage-account-for-sql-server-backup-and-restore-via-the-portal/
      CREATE CREDENTIAL [https://<storageaccount>.blob.core.windows.net/<container>]
      WITH IDENTITY='SHARED ACCESS SIGNATURE'
       , SECRET = 'SAS token'
      So in our example that would be:

      CREATE CREDENTIAL [https://dbatoolslas.blob.core.windows.net/sql]
      WITH IDENTITY='SHARED ACCESS SIGNATURE'
       , SECRET = 'sv=2018-03-28&ss=b&srt=c&sp=rwdlac&se=2019-04-03T17:20:25Z&st=2019-04-03T09:20:25Z&spr=https&sig=PLpxNQCW%2FftHsC2NFgR3f4UUSIGGOtRRPLyLG5G90Ak%3D'

      Common mistakes when creating this type of credential are:

       1. Leaving a trailing space on the URL in the name
       2. Not removing the ? at the start of the SAS token
       3. Treating the SAS token as case-insensitive (it is case sensitive)
       4. Not setting the IDENTITY value correctly; it must be SHARED ACCESS SIGNATURE

      Second, I created a job using the query below and scheduled it:
      IF OBJECT_ID('tempdb..#ListOfDatabases') IS NOT NULL
          DROP TABLE #ListOfDatabases

      SELECT D.name AS SysDatabaseName, 
             CAST(SUM((F.size * 8) / 1024) AS VARCHAR(26)) + ' MB' AS DBSize, 
             ROW_NUMBER() OVER(ORDER BY SUM((F.size * 8) / 1024)) AS RowNumber
      INTO #ListOfDatabases
      FROM sys.master_files F
           INNER JOIN sys.databases D ON D.database_id = F.database_id
      WHERE D.name IN ('master','msdb') --- list the databases to back up here
      GROUP BY D.name
      ORDER BY RowNumber


      DECLARE 
      @Date AS NVARCHAR(25), 
      @TSQL AS NVARCHAR(MAX), 
      @ContainerName AS NVARCHAR(MAX), 
      @StorageAccountName AS VARCHAR(MAX), 
      @SASKey AS VARCHAR(MAX), 
      @DatabaseName AS SYSNAME, 
      @InitialValue INT, 
      @MaxValue INT

      SELECT @Date = FORMAT(GETDATE(), 'yyyy_MM_dd_hh_mm_ss_tt')
      SELECT @StorageAccountName = 'Your storage Account' --- Find this from Azure Portal
      SELECT @ContainerName = 'Your Blob Container' --- Find this from Azure Portal
      SELECT @InitialValue = 1
      SELECT @MaxValue = MAX(RowNumber) FROM #ListOfDatabases

      WHILE @InitialValue <= @MaxValue
          BEGIN
              SELECT @DatabaseName = SysDatabaseName FROM #ListOfDatabases WHERE RowNumber = @InitialValue
              SELECT @TSQL = 'BACKUP DATABASE [' + @DatabaseName + '] TO '
              SELECT @TSQL+='URL = N''https://' + @StorageAccountName + '.blob.core.windows.net/' + @ContainerName + '/' + @DatabaseName + '_full_backup_' + @Date + '.bak'''
              SELECT @TSQL+=' WITH COMPRESSION, CHECKSUM, NOFORMAT, NOINIT, SKIP, REWIND, NOUNLOAD, STATS = 10'
              SET @InitialValue = @InitialValue + 1
              EXECUTE (@TSQL)
          END

      Third, I created a Lifecycle Management rule to delete any Block Blob files older than 5 days.


      https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url?view=sql-server-2017
      Backup to block blob vs. page blob
      There are two types of blobs that can be stored in the Microsoft Azure Blob storage service: block and page blobs. SQL Server backup can use either blob type depending upon the Transact-SQL syntax used: If the storage key is used in the credential, page blob will be used; if the Shared Access Signature is used, block blob will be used.

      Backup to block blob is only available in SQL Server 2016 or later version. Backup to block blob instead of page blob if you are running SQL Server 2016 or later. The main reasons are:

      Shared Access Signature is a safer way to authorize blob access compared to storage key.
      You can backup to multiple block blobs to get better backup and restore performance, and support larger database backup.
      Block blob is cheaper than page blob.
      Customers that need to backup to page blobs via a proxy server will need to use backuptourl.exe.
      Backup of a large database to blob storage is subject to the limitations listed in Managed instance T-SQL differences, limitations, and known issues.

      If the database is too large, either:

      Use backup compression or
      Backup to multiple block blobs


      Conclusion: if you use SQL Server Maintenance Plans to back up to URL, you are forced to use the Azure Storage account key to create the credential under Security; connecting to the storage account that way stores your backup file as a Page Blob, and there is no way to change this to Block Blob: https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url?view=sql-server-2017. To store your backup as a Block Blob you must use a SAS (Shared Access Signature), but then you can't use SQL Server Maintenance Plans, and you need to use T-SQL to take the backup.

      If you want to Convert Page Blob to Block Blob and Vice-Versa then follow this link: https://www.rossmc.co.uk/2020/04/26/ConvertPageBlobToArchive.html

      Question: does anyone have better ideas? I am open to discussion.


      Thanks Shiven:) If Answer is Helpful, Please Vote

      SQL Server 2016 SP2 CU4 update failed for sql tool extension


      I already have SQL Server 2016 SP2 (with CU KB4458621) and would like to install CU4; however, it seems the patch sequencer didn't detect that SP2 was installed for sql_tools_extensions.

      sql_tools_extensions_Cpu64_1.log

      SOFTWARE RESTRICTION POLICY: C:\965cbc8e038e3\x64\setup\sql_tools_extensions.msp is permitted to run at the 'unrestricted' authorization level.
      SequencePatches starts. Product code: {AB765DC7-7642-4D1C-BEDC-035516CCD224}, Product version: 13.0.1601.5, Upgrade code: {BB3795F0-6ECF-450B-8D03-2A264C5F95DF}, Product language 1033
      PATCH SEQUENCER: verifying the applicability of QFE patch C:\965cbc8e038e3a2d191fea70aca95a\x64\setup\sql_tools_extensions.msp against product code: {AB765DC7-7642-4D1C-BEDC-035516CCD224}, product version: 13.0.1601.5, product language 1033 and upgrade code: {BB3795F0-6ECF-450B-8D03-2A264C5F95DF}
      PATCH SEQUENCER: QFE patch C:\965cbc8e038e3\x64\setup\sql_tools_extensions.msp is not applicable.
       SequencePatches returns success.
      Final Patch Application Order:
       Other Patches:
      Unknown\Absent: {D16C247A-EE4D-4709-ACB7-10799D13BA42} - C:\965cbc8e038e3\x64\setup\sql_tools_extensions.msp
      Product: SQL Server 2016 Client Tools Extensions - Update '{D16C247A-EE4D-4709-ACB7-10799D13BA42}' could not be installed.

      Windows Installer installed an update. Product Name: SQL Server 2016 Client Tools Extensions. Product Version: 13.0.1601.5. Product Language: 1033. Manufacturer: Microsoft Corporation. Update Name: {D16C247A-EE4D-4709-ACB7-10799D13BA42}. Installation success or error status: 1642.

       Note: 1: 1708
      Product: SQL Server 2016 Client Tools Extensions -- Installation failed.

      Windows Installer installed the product. Product Name: SQL Server 2016 Client Tools Extensions. Product Version: 13.0.1601.5. Product Language: 1033. Manufacturer: Microsoft Corporation. Installation success or error status: 1642.

      Any hints?
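
      One hedged way to see what the installer believes is present: run setup's discovery report and compare the version listed for the Client Tools Extensions against the SP2 baseline (13.0.5026.0). The log above shows 13.0.1601.5, which is the 2016 RTM build, so the SP2-based CU would indeed consider the patch not applicable:

      Setup.exe /Action=RunDiscovery /q

      The report is written under ...\Setup Bootstrap\Log\<timestamp>\SqlDiscoveryReport.htm. If the component really is at RTM, reapplying SP2 (or repairing the shared components) before the CU may be needed.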

      Deadlock analysis - what's AutoCreateQPStats?


      I'm trying to analyze a deadlock that I can reproduce fairly consistently. I can tell what one session is doing (splitting an empty partition function range), but the other one is a bit of a mystery: it's somewhere in a lengthy stored procedure, and the deadlock XML says

      transactionname="AutoCreateQPStats"

      I haven't found any explanation of what that means.

      (I can't tell exactly where in the stored procedure this is occurring because the line number in the execution stack is inside an IF block that I have proved isn't getting executed in this test, so SQL Server and I are counting lines very differently. If anyone can explain that I'd also be interested, but this is probably the wrong forum for it.)
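
      One hedged reading of the name: AutoCreateQPStats appears to be the system transaction under which the query processor auto-creates column statistics during compilation. If that reading is right, a quick test is to check whether the deadlock disappears with auto-create disabled (in a test database only, since this changes optimizer behavior):

      ALTER DATABASE [YourTestDb] SET AUTO_CREATE_STATISTICS OFF;  -- [YourTestDb] is a placeholder
      -- reproduce the deadlock scenario here, then restore the default:
      ALTER DATABASE [YourTestDb] SET AUTO_CREATE_STATISTICS ON;

      If the deadlock vanishes, the mystery session is likely compiling a query against the partitioned table and creating statistics on it; that work happens at compile time rather than at an executing statement, which might also bear on the line-number mismatch.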


      On-Premises SQL Server 2016 Database Backup to Azure Storage Account as Blob Type = Block Blob and NOT Page Blob?


      Hi All,
      Recently I was working on Data Warehouse disaster recovery. Our SQL Server 2016 is on-premises, on a VM. I need to back up 15-16 databases and store the backups outside the VM. These backups total around 1 TB per day, and I was planning to keep a minimum of 5 days, so I was looking for roughly 6-7 TB of space outside the VM. After checking with the infrastructure team, I found that with the current setup they can't arrange 6-7 TB of space outside the VM.

      I started exploring Azure Blob Storage to store our backup on Azure Cloud and found this thread: https://www.mssqltips.com/sqlservertip/4900/perform-onpremises-sql-server-database-backups-using-maintenance-plans-to-azure-blob-storage/ and https://sqlbak.com/blog/sql-server-backup-to-url

      I followed the steps from the above thread and was able to take the backup and store it in the Azure Storage account, but as a Page Blob rather than a Block Blob. (Conclusion: if you use SQL Server Maintenance Plans to back up to URL, you are forced to use the Azure Storage account key to create the credential under Security; connecting to the storage account that way stores your backup file as a Page Blob, and there is no way to change this to Block Blob: https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url?view=sql-server-2017. To store your backup as a Block Blob you must use a SAS (Shared Access Signature), but then you can't use SQL Server Maintenance Plans.)

      The next step was to clean up backup files older than 5 days. First I thought of using Azure Blob "Lifecycle Management", but then I realized it works only with Block Blobs, not Page Blobs. So I used a PowerShell script to clean up the older .bak Page Blob files: https://social.msdn.microsoft.com/Forums/en-US/df73b5a3-d27b-45ee-b440-c1bed99f9b4e/how-to-automate-bak-delete-from-azure-blob?forum=sqlnetfx


      After doing all of the above, everything ran perfectly, and after 7 days I had 5 days of active backups and 2 days of soft-deleted backups. Then I calculated the monthly cost of storing 7-8 TB as Page Blobs, and it surprised me: it was going to cost 800-900 AUD every month, which is very high compared to Block Blob: https://docs.microsoft.com/en-us/answers/questions/39483/calculating-price-for-page-blob-in-azure-storage-a.html

      Next I looked for options to store my backup files as Block Blobs instead of Page Blobs. After spending 1-2 days I found that it is possible to take the backup using a Shared Access Signature, but that does not work with SQL Server Maintenance Plans. This link helped me write the SQL code to take a backup using SAS and store it as a Block Blob: https://blog.sqlauthority.com/2018/07/17/sql-server-backup-to-url-script-to-generate-credential-and-backup-using-shared-access-signature-sas/. Also, remember this works with SQL Server 2016. I have one server on SQL Server 2014 and it does not work on that version, so on 2014 your only option is the storage account key, which means Page Blob and not Block Blob: https://blog.sqlauthority.com/2017/06/22/sql-server-unable-restore-url-specified-url-points-block-blob-backup-restore-operations-block-blobs-not-permitted/
      Note: I generated an ad-hoc SAS with the following settings. Initially I did not check "Service" under "Allowed resource types" and got the error "Azure Storage Explorer - Inadequate resource type access".


      So first I created a credential using SAS under Security in SQL Server:
      https://stuart-moore.com/creating-azure-blob-storage-account-for-sql-server-backup-and-restore-via-the-portal/
      CREATE CREDENTIAL [https://<storageaccount>.blob.core.windows.net/<container>]
      WITH IDENTITY='SHARED ACCESS SIGNATURE'
       , SECRET = 'SAS token'
      So in our example that would be:

      CREATE CREDENTIAL [https://dbatoolslas.blob.core.windows.net/sql]
      WITH IDENTITY='SHARED ACCESS SIGNATURE'
       , SECRET = 'sv=2018-03-28&ss=b&srt=c&sp=rwdlac&se=2019-04-03T17:20:25Z&st=2019-04-03T09:20:25Z&spr=https&sig=PLpxNQCW%2FftHsC2NFgR3f4UUSIGGOtRRPLyLG5G90Ak%3D'

      Common mistakes when creating this type of credential are:

       1. Leaving a trailing space on the URL in the name
       2. Not removing the ? at the start of the SAS token
       3. Treating the SAS token as case-insensitive (it is case sensitive)
       4. Not setting the IDENTITY value correctly; it must be SHARED ACCESS SIGNATURE

      Second, I created a job using the query below and scheduled it:
      IF OBJECT_ID('tempdb..#ListOfDatabases') IS NOT NULL
          DROP TABLE #ListOfDatabases

      SELECT D.name AS SysDatabaseName, 
             CAST(SUM((F.size * 8) / 1024) AS VARCHAR(26)) + ' MB' AS DBSize, 
             ROW_NUMBER() OVER(ORDER BY SUM((F.size * 8) / 1024)) AS RowNumber
      INTO #ListOfDatabases
      FROM sys.master_files F
           INNER JOIN sys.databases D ON D.database_id = F.database_id
      WHERE D.name IN ('master','msdb') --- list the databases to back up here
      GROUP BY D.name
      ORDER BY RowNumber


      DECLARE 
      @Date AS NVARCHAR(25), 
      @TSQL AS NVARCHAR(MAX), 
      @ContainerName AS NVARCHAR(MAX), 
      @StorageAccountName AS VARCHAR(MAX), 
      @SASKey AS VARCHAR(MAX), 
      @DatabaseName AS SYSNAME, 
      @InitialValue INT, 
      @MaxValue INT

      SELECT @Date = FORMAT(GETDATE(), 'yyyy_MM_dd_hh_mm_ss_tt')
      SELECT @StorageAccountName = 'Your storage Account' --- Find this from Azure Portal
      SELECT @ContainerName = 'Your Blob Container' --- Find this from Azure Portal
      SELECT @InitialValue = 1
      SELECT @MaxValue = MAX(RowNumber) FROM #ListOfDatabases

      WHILE @InitialValue <= @MaxValue
          BEGIN
              SELECT @DatabaseName = SysDatabaseName FROM #ListOfDatabases WHERE RowNumber = @InitialValue
              SELECT @TSQL = 'BACKUP DATABASE [' + @DatabaseName + '] TO '
              SELECT @TSQL+='URL = N''https://' + @StorageAccountName + '.blob.core.windows.net/' + @ContainerName + '/' + @DatabaseName + '_full_backup_' + @Date + '.bak'''
              SELECT @TSQL+=' WITH COMPRESSION, CHECKSUM, NOFORMAT, NOINIT, SKIP, REWIND, NOUNLOAD, STATS = 10'
              SET @InitialValue = @InitialValue + 1
              EXECUTE (@TSQL)
          END

      Third, I created a Lifecycle Management rule to delete any Block Blob files older than 5 days. There is a catch with Lifecycle Management: if your backups keep failing there are no new files, yet the rule keeps deleting the older ones. So it is better to use a PowerShell script and delete old backups only when a new file exists: https://social.msdn.microsoft.com/Forums/en-US/df73b5a3-d27b-45ee-b440-c1bed99f9b4e/how-to-automate-bak-delete-from-azure-blob?forum=sqlnetfx

      Fourth, taking a backup of larger databases (400 GB) failed with the following error:

      "Write on "" failed: 1117(The request could not be performed because of an I/O device error.) [SQLSTATE 42000] (Error 3202)  BACKUP DATABASE is terminating abnormally. [SQLSTATE 42000] (Error 3013)."

      Here is the fix: https://docs.microsoft.com/en-us/archive/blogs/sqlcat/backing-up-a-vldb-to-azure-blob-storage. Conclusion: if using Backup to URL to create striped backups of large databases (over 48 GB per stripe), specify MAXTRANSFERSIZE = 4194304 and BLOCKSIZE = 65536 in the BACKUP statement.

      Replace SELECT @TSQL+=' WITH COMPRESSION, CHECKSUM, NOFORMAT, NOINIT, SKIP, REWIND, NOUNLOAD, STATS = 10' with SELECT @TSQL+=' WITH COMPRESSION, MAXTRANSFERSIZE = 4194304, BLOCKSIZE = 65536, CHECKSUM, NOFORMAT, NOINIT, SKIP, REWIND, NOUNLOAD, STATS = 10' in the query from the second step.


      https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url?view=sql-server-2017
      Backup to block blob vs. page blob
      There are two types of blobs that can be stored in the Microsoft Azure Blob storage service: block and page blobs. SQL Server backup can use either blob type depending upon the Transact-SQL syntax used: If the storage key is used in the credential, page blob will be used; if the Shared Access Signature is used, block blob will be used.

      Backup to block blob is only available in SQL Server 2016 or later version. Backup to block blob instead of page blob if you are running SQL Server 2016 or later. The main reasons are:

      Shared Access Signature is a safer way to authorize blob access compared to storage key.
      You can backup to multiple block blobs to get better backup and restore performance, and support larger database backup.
      Block blob is cheaper than page blob.
      Customers that need to backup to page blobs via a proxy server will need to use backuptourl.exe.
      Backup of a large database to blob storage is subject to the limitations listed in Managed instance T-SQL differences, limitations, and known issues.

      If the database is too large, either:

      Use backup compression or
      Backup to multiple block blobs


      Conclusion: if you use SQL Server Maintenance Plans to back up to URL, you are forced to use the Azure Storage account key to create the credential under Security; connecting to the storage account that way stores your backup file as a Page Blob, and there is no way to change this to Block Blob: https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url?view=sql-server-2017. To store your backup as a Block Blob you must use a SAS (Shared Access Signature), but then you can't use SQL Server Maintenance Plans, and you need to use T-SQL to take the backup.

      If you want to Convert Page Blob to Block Blob and Vice-Versa then follow this link: https://www.rossmc.co.uk/2020/04/26/ConvertPageBlobToArchive.html

      Question: does anyone have better ideas? I am open to discussion.


      Thanks Shiven:) If Answer is Helpful, Please Vote








      Able to overwrite the existing database via SSIS, but not through the job schedule for the same SSIS package when old data and log files are present


      Hello friends... I am facing a weird scenario.

      I am trying to restore a few databases from Prod to Test (say Server P to Server T). I created an SSIS package through VS 2017 and can run it successfully without any issues, even when old data and log files are present in the Data and Log folders.

      I deployed the SSIS files to our central SSIS server (say Server C). Then I created jobs on Server C, and they fail if old data and log files are there. (I tested whether the SSIS process will overwrite the data files by detaching the database.) The jobs are not able to overwrite those detached files, but when I execute the SSIS package from VS 2017 it overwrites them.

      I used the REPLACE option in the RESTORE DATABASE T-SQL in SSIS. By the way, I added the SQL Server and Agent service accounts of the three servers to the local Administrators group on Server T. In the SSIS report I got the following message:

      Restore Databases:Error: The operating system returned the error '5(Access is denied.)' while attempting 'RestoreContainer::ValidateTargetForCreation' on 'E:\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA\\XXXXXX.mdf'.    

      Any clues? THANKS in Advance.
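
      A hedged reading of that error: the restore is performed by Server T's Database Engine service account, and detached database files get their ACL restricted to the account that performed the detach, which matches "works from VS interactively, fails as a job". A sketch of a fix, with a hypothetical service account name (run on Server T as an administrator):

      icacls "E:\Microsoft SQL Server\MSSQL15.MSSQLSERVER\MSSQL\DATA" /grant "DOMAIN\SqlSvcT:(OI)(CI)F"

      This grants full control, inherited by the files, to Server T's engine service account; adjust the account and path to your environment.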

       

      Order of Memory


      Hello Everyone,

      May I know the correct order of memory internals in SQL Server?

      For example: RAM -> SQL buffer pool -> memory nodes -> memory clerks & memory broker, etc.

      I might be incorrect, but can someone provide the order in which SQL Server's memory internals are organized and used?
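
      Without asserting the full hierarchy, two of its layers can be seen directly in the DMVs; a sketch (top memory consumers per node):

      SELECT memory_node_id, virtual_address_space_reserved_kb
      FROM sys.dm_os_memory_nodes;

      SELECT TOP (10) type, memory_node_id, SUM(pages_kb) AS pages_kb
      FROM sys.dm_os_memory_clerks
      GROUP BY type, memory_node_id
      ORDER BY SUM(pages_kb) DESC;

      The buffer pool appears here as the MEMORYCLERK_SQLBUFFERPOOL clerk, which illustrates that clerks sit under memory nodes and the buffer pool is one consumer among the clerks.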

      Thanks for all your help in advance!


      Regards, S_NO "_"

      SP slow on one spid but fine on another


      SQL Server 2016

      Is there anything that can cause executions on one spid to have problems but not another?

      I have a stored procedure which is called on average about once a second and usually completes in under 30 millisecs.

      I have an example of it being very slow on one spid - about 7 seconds per call.

      sys.dm_exec_requests showed it being blocked - but clearly not for the full 30 seconds. Other things seemed blocked for the full 30 seconds, but not this.

      Oddly, all the executions on one spid were slow (about 7 seconds each) for a period of about 30 seconds, while those on another spid worked normally: 4 calls on the slow spid and 17 on the fast one.

      During the period there were only ever two concurrent calls and they were all serviced by those two spids.

      There's no difference between the calls apart from an ID passed to access information. I doubt the calls could coincidentally cause everything on one spid to hit locked data but not the other.

      After this period that spid was fine again.
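
      For the next occurrence, a snapshot like this (a sketch; run it while the slowness is happening) ties the blocked spid to its blocker and shows both statements, the wait resource, and the isolation level, which should reveal how the two spids differ:

      SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time,
             r.wait_resource, s.transaction_isolation_level, t.text
      FROM sys.dm_exec_requests r
      JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
      CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
      WHERE r.blocking_session_id <> 0;

      One ordinary explanation consistent with the symptoms, offered as a hypothesis: with connection pooling, calls queue up behind whatever their pooled connection (spid) is currently doing, so one long blocking chain on a single connection slows every call routed to it while the other connection stays fast.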

      Should system databases get update statistics and index rebuilds?


      Dear all,

      Should I include master, msdb, and tempdb in the weekly maintenance plan's update statistics and rebuild index tasks?

      Thanks,

      Tim

      SQL Server 2019: Adaptive memory grant information not in execution plan XML


      Hi guys,

      Sorry if I've missed a question relating to this, but I've recently upgraded a server to SQL Server 2019, which as I understand it should be able to use memory grant feedback. I've tried running the example described here:

      https://blog.sqlauthority.com/2019/11/28/sql-server-memory-grant-feedback-memorygrantinfo-and-ismemorygrantfeedbackadjusted/

      I follow the steps, but the information I would expect to see in the execution plan for memory grant feedback is not present at all. I've checked the compatibility levels, and they are set to 150, which corresponds to 2019.

      Any ideas on how to get this information? I suspect something needs to be enabled for memory grant feedback, but I haven't found any documentation on what that might be.
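
      A couple of hedged checks, assuming row-mode memory grant feedback is the feature in question (it is on by default at compatibility level 150 but can be disabled per database):

      SELECT name, value
      FROM sys.database_scoped_configurations
      WHERE name = 'ROW_MODE_MEMORY_GRANT_FEEDBACK';

      ALTER DATABASE SCOPED CONFIGURATION SET ROW_MODE_MEMORY_GRANT_FEEDBACK = ON;

      Also note that MemoryGrantInfo and IsMemoryGrantFeedbackAdjusted appear in the actual execution plan (not the estimated one), and the flag only moves past 'NoFirstExecution' from the second execution of the plan onward, so the query has to run at least twice with the actual plan captured.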

      Do you need column level encryption if you have database level encryption?


      Hi. I think I have heard that MS SQL has database-level encryption (Transparent Data Encryption); that seems really smart. Anyone trying to access the data outside of the database, like system admins with broad access, would get nothing if the database was encrypted. We are a big company, and I'm sure almost all of our MS SQL databases are encrypted now.

      In our application, a while back, before there was database encryption, we went to a lot of trouble to encrypt certain sensitive columns. I would argue we don't really have to specifically encrypt those columns any more. With database encryption, access is the key to bad people seeing what they aren't supposed to see: if someone has the access they can see the column, and if they don't have access they aren't going to see it outside the database. If bad people somehow get access, encrypting the column isn't going to keep them from looking at it. So with full database encryption and a good DBA who only gives people access to what they're supposed to see, I would argue column-level encryption isn't necessary anymore. Am I missing something?

      Also, just in case anyone knows: do Teradata and Oracle have full database encryption now?

        

      Default trace was stopped because of an error. Cause: 0x80070057(The parameter is incorrect.).


      Hello, 

      We have MS SQL Server 2016 Enterprise (SP2-CU5). After the server starts, the default trace file grows to 20 MB and then stops with this error in the error log:

      Trace ID '1' was stopped because of an error. Cause: 0x80070057(The parameter is incorrect.). Restart the trace after correcting the problem.

      Error: 19099, Severity: 16, State: 1.

      sys.traces is empty, and sp_configure "default trace enabled" shows both config_value and run_value as 1.

      Any thoughts how to solve this?
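
      A minimal first step (a sketch; it bounces the trace rather than fixing the root cause): toggle the option off and on so the default trace restarts with a fresh rollover file:

      EXEC sp_configure 'show advanced options', 1;
      RECONFIGURE;
      EXEC sp_configure 'default trace enabled', 0;
      RECONFIGURE;
      EXEC sp_configure 'default trace enabled', 1;
      RECONFIGURE;

      The default trace rolls over at 20 MB, which is exactly where this one dies, so if it stops again at the same point the rollover to the next log_N.trc file is the thing to investigate (permissions or disk state on the instance's \Log directory).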


      INSERT/SELECT involving sys.views query generates gibberish in "truncated value" error when values wouldn't be truncated

      The following INSERT/SELECT is intended to insert a list of unique-index column names into a user table, excluding tables listed in a separate user table:

      INSERT src (refid,columnid,columnsequence)
      SELECT si.name,sc.name,sic.index_column_id 
      FROM sys.indexes si, sys.objects so, sys.index_columns sic, sys.columns sc 
      WHERE si.object_id = so.object_id 
      AND sic.object_id = si.object_id 
      AND sic.index_id = si.index_id 
      AND sc.object_id = si.object_id 
      AND sc.column_id = sic.column_id 
      AND so.type = 'U' 
      AND si.is_primary_key = 0 
      AND si.is_unique = 1 
      AND so.name NOT IN  
       (
        SELECT tableid FROM ignoretab 
       )

      For reference:
      create table ignoretab (tableid NVARCHAR(40))

      create table src (
        refid NVARCHAR(40),
        columnid NVARCHAR(40),
        columnsequence NUMERIC(18,0))

      Our software uses SQL Server as the backend, and this example works as intended in all but 3 of many databases with essentially the same data model. In those 3, the following error occurs:

      Msg 2628, Level 16, State 1, Line 29
      String or binary data would be truncated in table 'testbad.dbo.src', column 'refid'. Truncated value: '珐葻ʒ.P...睐葻ʒ..........Ì.몀롿Ì..ʒ.༠脡ʒ.᫠葸ʒ.'.
      The statement has been terminated.

      Problematic because:
      1. the truncated value is gibberish
      2. none of the values that should be inserted are > 40 characters

      The SELECT without the INSERT returns "refid" values that are all <= 30 characters. Internal indexes with longer names do exist (e.g., plan_persist_query_template_parameterization_cidx), but these are filtered out as type IT.

      Apparently the length check is done "too early", generating a truncation error that should not actually occur.

      The error vs. no-error cases show different execution plans. Understood that different data might result in different plans, but with essentially the same data it is not clear what is "wrong" in these few databases that results in a different plan.

      Normally we would rebuild indexes/refresh stats, but these are data dictionary views. So the question is what can be done in the "bad" databases to bring them in line with the majority? One answer might be hints, but arguably hints shouldn't be needed when the majority works without them.
          
      Any recommendations would be appreciated.
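
      One hedged workaround while the plan question is open: make the maximum length of the source expressions explicit, so the early length check cannot fire regardless of plan shape:

      INSERT src (refid, columnid, columnsequence)
      SELECT CAST(si.name AS NVARCHAR(40)), CAST(sc.name AS NVARCHAR(40)), sic.index_column_id
      FROM sys.indexes si, sys.objects so, sys.index_columns sic, sys.columns sc
      WHERE si.object_id = so.object_id
      AND sic.object_id = si.object_id
      AND sic.index_id = si.index_id
      AND sc.object_id = si.object_id
      AND sc.column_id = sic.column_id
      AND so.type = 'U'
      AND si.is_primary_key = 0
      AND si.is_unique = 1
      AND so.name NOT IN (SELECT tableid FROM ignoretab)

      The CAST silently truncates if a genuinely long name ever appears, so this trades the spurious error for that risk; it is a workaround, not an explanation of the plan difference.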

      SQL Server VLDB - 4TB DB Migration

      How can I migrate a 4 TB database from Datacenter-01 to Datacenter-02 at a different location?

      MRVSFLY

      SQL Server 2012 Database engine failed to install everytime


      Hi,

      I am trying to install SQL Server 2012 (64-bit) on our dev server running Windows Server 2012. Every time, the Database Engine fails to install. I've checked the event logs; they show:

      Operating system error 2(The system cannot find the file specified.) occurred while creating or opening file 'e:\sql11_main_t.obj.x86release\sql\mkmastr\databases\objfre\i386\modellog.ldf'. Diagnose and correct the operating system error, and retry the operation.

      I followed this link - https://serverfault.com/questions/447808/sql-server-express-service-is-not-starting - to resolve the issue. I opened a command prompt and ran the command: NET START MSSQL$SQLEXPRESS /f /T3608

      But it throws the error - The service name is invalid.

      Has anyone gone through this issue? Please suggest a solution.
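
      A hedged note on the last error: MSSQL$SQLEXPRESS is the service name only for a named instance called SQLEXPRESS; for a default instance the service is MSSQLSERVER, so the equivalent command would be:

      NET START MSSQLSERVER /f /T3608

      Adjust to MSSQL$<InstanceName> for a named instance. This only addresses the "service name is invalid" message; the modellog.ldf path in the first error (e:\sql11_main_t.obj.x86release\...) is Microsoft's internal build path baked into the shipped system database files, which typically means setup failed before the system databases were rebuilt, so there may be nothing valid for the service to start yet.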

      Thanks,


      Kunal
