Channel: Forum SQL Server Database Engine

Service acct info on this box's SQL or the box I'm connected to through SSMS?


Hi, I don't have enough "points" at https://stackoverflow.com/questions/7324407/get-service-account-details-of-the-sql-agent-service to question/comment on one of the answers about getting service account info from SQL.

Is it possible that the answer there offering the solution below is getting the information from the local box's SQL installation, not necessarily the instance they are connected to in SSMS?

I have some pretty good evidence that sys.dm_server_services returns info about the instance you are connected to, so I'm not relying on the solution below.

DECLARE @sn NVARCHAR(128);

-- Read the ObjectName value (the service account) of the SQLSERVERAGENT
-- service key from the registry.
EXEC master.dbo.xp_regread
    'HKEY_LOCAL_MACHINE',
    'SYSTEM\CurrentControlSet\services\SQLSERVERAGENT',
    'ObjectName',
    @sn OUTPUT;

SELECT @sn;
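
For reference, the alternative I'm leaning on is a minimal query against that DMV (a sketch; sys.dm_server_services is available from SQL Server 2008 R2 SP1 onwards):

-- Service account details for the SQL Server and SQL Agent services,
-- as reported by the instance this session is connected to.
SELECT  servicename,
        service_account,
        startup_type_desc,
        status_desc
FROM    sys.dm_server_services;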


How can I bulk insert into the deltastore? How to use SqlBulkCopy efficiently on a table with a non-clustered columnstore index?


Hello SQL server community,

We have an application that is loading data events in batches.

The application inserts events into a SQL Server table with bulk insert (System.Data.SqlBulkCopy), in batches of 1-10,000 rows.

We have added a non-clustered columnstore index to the table.

Now each bulk insert results in a COMPRESSED rowgroup the size of the batch, and after a while things get inefficient: you end up with lots of relatively small compressed rowgroups, and using the index becomes MUCH slower.

Essentially we are back to the days when there was no deltastore on NCCIs.

Of course you can run a costly REORGANIZE on your NCCI to merge those tiny closed rowgroups into big ones.
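For example, something along these lines (a sketch using the table/index names from the repro script further down; the COMPRESS_ALL_ROW_GROUPS option additionally forces open delta rowgroups to be compressed):

-- Costly but effective: ask the NCCI to merge its small compressed rowgroups
-- (and compress any delta rowgroups as well).
ALTER INDEX [NCI_1] ON [dbo].[Table]
REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);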


If you execute a plain INSERT statement, the index records it in the deltastore, and things are handled much more efficiently.

 

Therefore my question: is there any way to ask SQL Server to treat a bulk insert like a normal insert when updating columnstore indexes?

Another formulation: is there any way to avoid the BULKLOAD rowgroup trim reason on a columnstore index when bulk-loading data?

Thank you very much,

Alexander

This script explains the question more precisely:

To run it, you need to create a file on the filesystem for BULK INSERT.

It will create a database and clean it up afterwards.

SQL Server 2017 (14.0.3223.3) was used.

USE [master]
GO

THROW 51000, 'Create a file C:\TestColumnStoreInservVSBulkInsert.txt with content: "test, 1" and comment this line', 1;  

CREATE DATABASE TestColumnStoreInservVSBulkInsert 
GO
use [TestColumnStoreInservVSBulkInsert]

CREATE TABLE [Table](
[value] [varchar](20) NOT NULL INDEX  IX_1 CLUSTERED,
[state] int not NULL
)

CREATE NONCLUSTERED COLUMNSTORE INDEX [NCI_1] ON [dbo].[Table]
(
[value],
[state]
)WITH (DROP_EXISTING = OFF, COMPRESSION_DELAY = 0) 

insert into [Table] values (('TestInitail'), (1))

DECLARE @IndexStateQuery VARCHAR(MAX)  
SET @IndexStateQuery = 'SELECT i.object_id,   
    object_name(i.object_id) AS TableName,   
    i.name AS IndexName,   
    i.index_id,   
    i.type_desc,   
    CSRowGroups.*
FROM sys.indexes AS i  
JOIN sys.dm_db_column_store_row_group_physical_stats AS CSRowGroups  
    ON i.object_id = CSRowGroups.object_id AND i.index_id = CSRowGroups.index_id   
ORDER BY object_name(i.object_id), i.name, row_group_id;  '

EXEC (@IndexStateQuery)
-- Creates a COMPRESSED rowGroup with 1! record  
--QUESTION: How to make this statement add data to Open Rowgroup ?
BULK INSERT [Table] FROM 'C:\TestColumnStoreInservVSBulkInsert.txt' WITH ( FORMAT='CSV', ROWS_PER_BATCH = 1);

EXEC (@IndexStateQuery)
-- Adds one record to existing open rowgroup 
insert into [Table] select top 1 * from [Table]
EXEC (@IndexStateQuery)

--Costly fix. Merge and recompress closed rowgroups
--ALTER INDEX NCI_1   ON [Table] REORGANIZE   
--EXEC (@IndexStateQuery)


--Cleanup
use [master]
alter database [TestColumnStoreInservVSBulkInsert] set single_user with rollback immediate
drop database [TestColumnStoreInservVSBulkInsert]

 

Locks doubt


Hi All,

I've heard that readers can also block writers. Can anyone provide a demo example to prove this? I was under the impression that only writers can block readers.

My 2nd question is: will an INSERT block a SELECT?
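
For what it's worth, a minimal sketch of the kind of repro I'm after (assuming a test table dbo.t with an int column id and at least one row with id = 1; under REPEATABLE READ the reader keeps its shared lock until the transaction ends, so the writer blocks):

-- Session 1: the "reader"
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;
SELECT id FROM dbo.t WHERE id = 1;      -- takes an S lock and keeps it
-- leave the transaction open for now

-- Session 2: the "writer" on the same row
UPDATE dbo.t SET id = 1 WHERE id = 1;   -- blocked until session 1 finishes

-- Session 1, later
COMMIT;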

Thanks,

Sam

SQL Database Error 924 – Database is already open and can only have one user at a time


In a Java application I am working on, I have an updater which updates the DB structure whenever a new version is started for the first time.
This updater is executed in single-user mode, to ensure no one is working on the DB in the meantime.
During this update, I have to request metadata, and this request sometimes leads to Error 924.
This bug occurs only since I upgraded the mssql-jdbc driver to 6.3.3 or later, and I therefore opened an issue on their GitHub page.
However, during the discussion of this issue, it turned out that neither the driver nor our code opens a new connection. Even SQL Server Profiler logs only one connection.
During my research, I found an article which states that SINGLE_USER can only be used if AUTO_UPDATE_STATISTICS_ASYNC is turned OFF. That is the case in my database, but since async jobs on the SQL Server seem to cause problems with single-user mode, my guess is that some statements used by the driver to load the metadata (starting with line 988) are executed asynchronously. This presumption is reinforced by the fact that the error only occurs if the database is stored on an HDD drive. On SSD drives everything works fine, which sounds like some kind of race condition.
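For reference, the relevant settings can be checked and switched with something like this (a minimal sketch; [MyDb] is a placeholder for the actual database name):

-- Check whether async statistics updates are enabled (substitute the real database name).
SELECT name, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = N'MyDb';

-- Turn async stats off before switching to single-user mode.
ALTER DATABASE [MyDb] SET AUTO_UPDATE_STATISTICS_ASYNC OFF;
ALTER DATABASE [MyDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- ...run the updater...

ALTER DATABASE [MyDb] SET MULTI_USER;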
Our current workaround is to catch this error and retry the statement after 3 seconds until it works. Usually it takes about 5 retries, or 15 seconds.

Is there some setting in SQL Server that I am missing, or what else could cause this issue?

performance issue.

We are seeing a performance issue on our database server. Most of the queries are running slow. It's a newly configured server. I want to check whether the data files / tempdb files are being accessed by anti-virus software. Our admin says mdf/ldf files are ignored by default by this tool (the tool is Cylance). I would like to verify that again to see whether that's actually the case; is there any way to do it?
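
One way to ground that conversation is to hand the admin the actual file locations, so the exclusion list can be checked against them (a minimal sketch):

-- Every data and log file path on the instance, for comparison
-- against the anti-virus exclusion list.
SELECT  DB_NAME(database_id) AS database_name,
        type_desc,
        physical_name
FROM    sys.master_files
ORDER BY database_name, type_desc;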

stale stats


Is there a query that I can run to find stale statistics?
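
A possible starting point is something like this sketch (it assumes sys.dm_db_stats_properties is available, i.e. SQL Server 2008 R2 SP2 / 2012 SP1 or later; "stale" here simply means a high modification count since the last update):

-- Last update time and row-modification counter for every user-table statistic
-- in the current database.
SELECT  OBJECT_NAME(s.object_id)  AS table_name,
        s.name                    AS stats_name,
        sp.last_updated,
        sp.rows,
        sp.modification_counter
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY sp.modification_counter DESC;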

Oracle OLEDB provider not visible in import/export wizard.


hi All,

I'm trying to export and import data between SQL Server and Oracle. However, in the Import/Export Wizard I do not see the Oracle OLEDB provider listed, even though the Oracle client is already installed. I also see that under Linked Servers > Providers, OraOLEDB.Oracle is listed.

regarding ola hallengren index maintenance script

Hi All,

Does Ola Hallengren's index script take care of defragmenting HEAP tables as well, or just clustered and non-clustered indexes?

Thanks,
Sam



SELECT * query taking long on a 1,500-record table, and the table size is huge.


Hi SQL Gurus,

I'm investigating a performance issue in one of our prod environments.

1. There is a table which contains a little over 1,500 records, but the data size is showing as over 3 GB and the index space as over 51 GB. I wonder why that is?

2. A simple SELECT * on that table is taking more than 11 seconds to retrieve all records, and I see a scan happening on the non-clustered index which is on the primary key.
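
A diagnostic sketch that may help narrow down where the space is going (dbo.MyTable is a placeholder for the real table name; very low page density or large ghost-record counts would show up here):

-- Per-index page count, page density and ghost records for one table.
SELECT  i.name                              AS index_name,
        ips.index_type_desc,
        ips.page_count,
        ips.avg_page_space_used_in_percent,
        ips.ghost_record_count,
        ips.record_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.MyTable'), NULL, NULL, 'DETAILED') AS ips
JOIN    sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id;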



Any guidance would be appreciated.

Rebuilding heap tables

Hi All,

We are seeing heavy fragmentation (99%) with page counts of more than 10,000 pages,
so we are rebuilding those HEAP tables using the ALTER TABLE tname REBUILD command.

My question is: if I have 100 such tables, which of the two options below would perform better (time, transaction log management, etc.)?
The database is in the FULL recovery model, on SQL Server 2012 Enterprise Edition.

ALTER TABLE T1 REBUILD;
ALTER TABLE T2 REBUILD;
ALTER TABLE T3 REBUILD;
ALTER TABLE T4 REBUILD;
:
ALTER TABLE T100 REBUILD;


------------- [OR]-----------------

ALTER TABLE T1 REBUILD;
GO
ALTER TABLE T2 REBUILD;
GO
ALTER TABLE T3 REBUILD;
GO
ALTER TABLE T4 REBUILD;
GO
:
ALTER TABLE T100 REBUILD;
GO

Thanks,
Sam

DB performance is slow?


Hi All,

Last week we got users complaining about a specific database's performance; they were saying almost all of the queries were running slow. They provided one sample query which took more than 10 minutes, and they mentioned that this query would normally finish in a few seconds.
I looked at the plan; it was doing a full table scan, as they were using

SELECT * FROM tablename WHERE colname IN ('val1','val2','val3', ...);

It was returning only 71 rows, and I saw the wait type was PAGEIOLATCH_SH. There was no blocking.
I took the plan and explained it to them; the table was a heap suffering from 99.9% fragmentation with more than a 10K page count. That is how I made the query run faster; it then took around 4 seconds to complete.


My question is: if all the queries are performing poorly, where do I start my troubleshooting?
Literally, so many things were coming to mind that it was overwhelming. I was checking blocking, checking Task Manager for resource utilization, wondering whether other databases were taking more memory, whether any SQL Agent jobs such as index maintenance or backups were running, and so on.
Also, that instance is shared among some 50-odd databases; some of them are 3 TB, 2 TB or 1 TB, and others are around 900 MB. Physical RAM is 96 GB, of which 90 GB is allocated to max server memory. It's a dedicated SQL Server QA box.

Given all this information, if someone comes and says that all the queries of a specific database on that instance are running slow, where should I start my troubleshooting? Please advise; I'm looking for inputs here. We are using SQL 2012 EE. Are there any specific questions we can ask the application team or users?
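
For a first pass, this is the kind of snapshot I usually grab when "everything is slow" (a sketch; it lists current requests with their waits and text, and can be filtered by database):

-- What is running right now, what is it waiting on, and who is blocking whom?
SELECT  r.session_id,
        r.status,
        r.wait_type,
        r.wait_time               AS wait_time_ms,
        r.blocking_session_id,
        DB_NAME(r.database_id)    AS database_name,
        t.text                    AS sql_text
FROM    sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE   r.session_id <> @@SPID
ORDER BY r.wait_time DESC;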

Thanks,
Sam


 

What is the use of mapped to certificate, asymmetric key, credentials


Hi DBAs,
Can anyone help me understand the concept of "mapped to certificate", "mapped to asymmetric key", and "mapped to credential", which appear on the login properties page?
1) What is the use of mapping a login to a certificate/asymmetric key/credential?
2) How do you map a login to a certificate/asymmetric key/credential?
3) Under which conditions would you map a login to a certificate/asymmetric key/credential?
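
For context, a minimal sketch of what the certificate case looks like (names are hypothetical; as far as I know such logins cannot be used to connect interactively and are mainly used for code signing and similar scenarios):

USE master;
GO
-- Hypothetical certificate, protected by a password so no database master key is required.
CREATE CERTIFICATE DemoSigningCert
    ENCRYPTION BY PASSWORD = 'Str0ng_P@ssw0rd!'
    WITH SUBJECT = 'Demo signing certificate';
GO
-- Login mapped to that certificate; permissions granted to it flow to
-- modules signed with the certificate.
CREATE LOGIN DemoCertLogin FROM CERTIFICATE DemoSigningCert;
GO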

Regards,
Yashwant Vishwakarma | www.sqlocean.com

sql memory related question


Hi All,

Need help here.
Why is SQL Server not releasing memory to the OS even after lowering the max server memory setting from 61 GB to 55 GB? Total RAM on the server is 64 GB.
This server was onboarded 2 days ago. Max server memory was not set initially, so SQL Server used all the memory and we got an alert about 97% memory usage on the server.
Lock pages in memory is disabled. There were no active sessions on the server.
We are using Microsoft SQL Server 2016 (SP2) Enterprise Edition: Core-based Licensing (64-bit) on Windows Server 2016 Datacenter 10.0 <X64> (Build 14393: ) (Hypervisor).


My second question is: what does available_commit_limit_kb from the [sys].[dm_os_process_memory] DMV mean in layman's terms?

Please find attached a screenshot of Task Manager and the DMV output.

Queries used:


-- Get configuration values for instance
SELECT @@servername as ServerName,name, cast(value_in_use as int)/1024 as value_in_gb, [description] FROM sys.configurations
where name in ('max server memory (MB)','min server memory (MB)')
ORDER BY name ;


-- OS  memory
SELECT  cast(ROUND([total_physical_memory_kb]/1024./1024.,2) as numeric(36,2)) as total_mem_gb,
        cast(ROUND([available_physical_memory_kb]/1024./1024.,2) as numeric(36,2)) as Avl_mem_gb,
        cast(ROUND([total_page_file_kb]/1024./1024.,2) as numeric(36,2)) as total_page_mem_gb,
        cast(ROUND([available_page_file_kb]/1024./1024.,2) as numeric(36,2)) as Avl_page_mem_gb,
        [system_memory_state_desc]
FROM    [sys].[dm_os_sys_memory] WITH (NOLOCK)
OPTION  (RECOMPILE);
GO



-- SQL Server Process Address space info  
SELECT  cast(ROUND([physical_memory_in_use_kb]/1024./1024.,2) as numeric(36,2)) as total_sql_phy_mem_in_use_gb,
        cast(ROUND([locked_page_allocations_kb]/1024./1024.,2) as numeric(36,2)) as Locked_pages_gb,
        [page_fault_count],
        [memory_utilization_percentage],
        cast(ROUND([available_commit_limit_kb]/1024./1024.,2) as numeric(36,2)) as avl_commit_gb,
        [process_physical_memory_low],
        [process_virtual_memory_low]
FROM    [sys].[dm_os_process_memory] WITH (NOLOCK)
OPTION  (RECOMPILE);
GO


--active sessions
EXEC MASTER..sp_WhoIsActive
       @show_sleeping_spids= 0,  
    @OUTPUT_COLUMN_LIST =
    '[session_id],[blocking_session_id],[dd hh:mm:ss.mss],[start_time],[database_name],[status],[open_tran_count],[login_name],[host_name],[program_name],[sql_command],[sql_text]';
go
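
One more query that might be worth adding (a sketch): a breakdown by memory clerk, to see which component is actually holding the memory.

-- Top memory clerks by allocated pages (column names for SQL Server 2012 and later).
SELECT TOP (10)
        [type],
        name,
        SUM(pages_kb) / 1024 AS pages_mb
FROM    sys.dm_os_memory_clerks
GROUP BY [type], name
ORDER BY pages_mb DESC;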

Thanks,
Sam




Recover Service Account Password from SQL


Hi Folks,

I configured the SQL Server way back and forgot the password. How can I recover/get the password for the service account I used during configuration?

Best Regards,

WriteLog wait type


runtime                        waiting_tasks_count  wait_time_ms         max_wait_time_ms     signal_wait_time_ms  wait_type                                                  
------------------------------ -------------------- -------------------- -------------------- -------------------- ------------------------------------------------------------                                                
2019-06-06T14:49:13.390                     1159087             8439614                 9456                51289 WRITELOG                  
2019-06-06T15:26:47.187                     1915753            13861957                 9456                98141 WRITELOG 

I have got the above wait stats figures. They don't seem realistic, because

in only around half an hour it accumulated 5,422,343 ms (13,861,957 - 8,439,614) of wait time, which is 5,422 seconds!

That seems weird,

and also the max_wait_time_ms is only about 9 seconds.
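
For context, the capture behind those figures presumably looks something like this sketch; note that the DMV values are cumulative since the last restart (or wait-stats clear) and are summed over every task that waited, so the total can exceed wall-clock time:

SELECT  SYSDATETIME()        AS runtime,
        waiting_tasks_count,
        wait_time_ms,
        max_wait_time_ms,
        signal_wait_time_ms,
        wait_type
FROM    sys.dm_os_wait_stats
WHERE   wait_type = N'WRITELOG';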


SQL Backup to UNC


Hi 

I used to back up all our prod databases to a media server using a UNC path: about 11 prod servers with about 200 databases.

It worked without problems. About 6 months ago we changed to a different service provider to host our servers in their 'cloud' datacentre, so it's a much bigger setup than we had before, and they support a great many customers in their 'cloud'.

But the backup solution is now unreliable: regular backup failures with errors 64 and 121, which point to network errors.

My question is whether using a UNC path is inherently unreliable in such a cloud environment, where performance predictability and reliability seem to have gone down the pan.

Should I change tack and back up to local storage, then copy the backup files to a remote location, with retries of that copy if it encounters problems? Or should I be expecting better from an 'enterprise service provider'?

I'm interested in others thoughts, experiences and opinions.

 

Unsafe assembly 'ddbmasqlclrlib, version=4.7.0.0, culture=neutral, publickeytoken=c0f9ccf99c6c0e66, processorarchitecture=msil' loaded into appdomain 59 (master.dbo[runtime].58).


Hi,

I'm getting the below message in the SQL Server logs; as a result, the application can't connect to SQL Server:

Unsafe assembly 'ddbmasqlclrlib, version=5.7.0.0, culture=neutral, publickeytoken=  , processorarchitecture=msil' loaded into appdomain 59 (master.dbo[runtime].58)

The below error appears in the monitoring tool at the same time the above message appears in the SQL Server logs. Any idea?

Cannot connect to SQL Server instance  A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.) : The semaphore timeout period has expired [121]


TARAK

Restoring issue


Hi, hope you people are doing great.

I am facing an issue when restoring.

The issue is:

I backed up a database from SQL 2008 R2 and restored it into SQL 2012.

It was restored successfully, but key columns in the tables are missing. I mean the primary key, foreign key and other key columns specified in the 2008 R2 database are missing in 2012, and I need to create them again.

In brief, the key columns in the tables and the relationships between them are missing.

Please explain why this happened, and let me know any precautions to take while restoring a lower-version backup into 2012.

Thanks


Execute script against all databases in an instance and send the result as Email Body


Hi TSQL gurus,

I have a requirement to come up with reporting for indexes for our whole production environment:

1. I found the below script online and modified it according to my requirements, and I am trying to execute it using the undocumented sp_MSForEachDb stored procedure so that I can get results from all databases in the instance. The script should show all exact duplicate indexes in a particular database. I'm getting the error shown below the script.

Script:

--This query will find exact duplicate indexes with exact Key Column List and Exact Included Columns.
--Source: https://www.sqlservercentral.com/articles/finding-and-eliminating-duplicate-or-overlapping-indexes-1
SET NOCOUNT ON
EXEC sp_MSForEachDb 'USE [?]
IF  ''?'' NOT IN (''master'', ''model'', ''msdb'', ''tempdb'')

BEGIN
IF OBJECT_ID(''tempdb..#IndexTemp'') IS NOT NULL DROP Table #IndexTemp --If exist drop the temp table. 
;WITH CTE_INDEX_DATA AS (
       SELECT
              SCHEMA_DATA.name AS Schema_Name,
              TABLE_DATA.name AS Table_Name,
              INDEX_DATA.name AS Index_Name,
              STUFF((SELECT  ', ' + COLUMN_DATA_KEY_COLS.name + '' + CASE WHEN INDEX_COLUMN_DATA_KEY_COLS.is_descending_key = 1 THEN ''DESC'' ELSE ''ASC'' END -- Include column order (ASC / DESC)
                                  FROM    sys.tables AS T
                                                INNER JOIN sys.indexes INDEX_DATA_KEY_COLS
                                                ON T.object_id = INDEX_DATA_KEY_COLS.object_id
                                                INNER JOIN sys.index_columns INDEX_COLUMN_DATA_KEY_COLS
                                                ON INDEX_DATA_KEY_COLS.object_id = INDEX_COLUMN_DATA_KEY_COLS.object_id
                                                AND INDEX_DATA_KEY_COLS.index_id = INDEX_COLUMN_DATA_KEY_COLS.index_id
                                                INNER JOIN sys.columns COLUMN_DATA_KEY_COLS
                                                ON T.object_id = COLUMN_DATA_KEY_COLS.object_id
                                                AND INDEX_COLUMN_DATA_KEY_COLS.column_id = COLUMN_DATA_KEY_COLS.column_id
                                  WHERE   INDEX_DATA.object_id = INDEX_DATA_KEY_COLS.object_id
                                                AND INDEX_DATA.index_id = INDEX_DATA_KEY_COLS.index_id
                                                AND INDEX_COLUMN_DATA_KEY_COLS.is_included_column = 0
                                  ORDER BY INDEX_COLUMN_DATA_KEY_COLS.key_ordinal
                                  FOR XML PATH('')), 1, 2, '') AS Key_Column_List ,
          STUFF(( SELECT  ', ' + COLUMN_DATA_INC_COLS.name
                                  FROM    sys.tables AS T
                                                INNER JOIN sys.indexes INDEX_DATA_INC_COLS
                                                ON T.object_id = INDEX_DATA_INC_COLS.object_id
                                                INNER JOIN sys.index_columns INDEX_COLUMN_DATA_INC_COLS
                                                ON INDEX_DATA_INC_COLS.object_id = INDEX_COLUMN_DATA_INC_COLS.object_id
                                                AND INDEX_DATA_INC_COLS.index_id = INDEX_COLUMN_DATA_INC_COLS.index_id
                                                INNER JOIN sys.columns COLUMN_DATA_INC_COLS
                                                ON T.object_id = COLUMN_DATA_INC_COLS.object_id
                                                AND INDEX_COLUMN_DATA_INC_COLS.column_id = COLUMN_DATA_INC_COLS.column_id
                                  WHERE   INDEX_DATA.object_id = INDEX_DATA_INC_COLS.object_id
                                                AND INDEX_DATA.index_id = INDEX_DATA_INC_COLS.index_id
                                                AND INDEX_COLUMN_DATA_INC_COLS.is_included_column = 1
                                  ORDER BY INDEX_COLUMN_DATA_INC_COLS.key_ordinal
                                  FOR XML PATH('')), 1, 2, '') AS Include_Column_List,
       INDEX_DATA.is_disabled -- Check if index is disabled before determining which dupe to drop (if applicable)
       FROM sys.indexes INDEX_DATA
       INNER JOIN sys.tables TABLE_DATA
       ON TABLE_DATA.object_id = INDEX_DATA.object_id
       INNER JOIN sys.schemas SCHEMA_DATA
       ON SCHEMA_DATA.schema_id = TABLE_DATA.schema_id
       WHERE TABLE_DATA.is_ms_shipped = 0
       AND INDEX_DATA.type_desc IN (''NONCLUSTERED'', ''CLUSTERED'')


--Insert all records into a temp table #IndexTemp with appropriate filters:
SELECT * INTO #IndexTemp
FROM CTE_INDEX_DATA DUPE1
WHERE EXISTS
(SELECT * FROM CTE_INDEX_DATA DUPE2
 WHERE DUPE1.schema_name = DUPE2.schema_name
 AND DUPE1.table_name = DUPE2.table_name
 AND DUPE1.key_column_list = DUPE2.key_column_list
 AND ISNULL(DUPE1.include_column_list, '') = ISNULL(DUPE2.include_column_list, '')
 AND DUPE1.index_name <> DUPE2.index_name)
 AND INDEX_NAME NOT LIKE (''%PK%'')

--Return duplicate table_names only
 SELECT @@SERVERNAME Server_Name, DB_NAME() Database_Name, * from #IndexTemp WHERE table_name IN
    (SELECT table_name FROM #IndexTemp GROUP BY table_name HAVING COUNT(*) > 1)
    ORDER BY table_name
END
'


Result/error:

Msg 102, Level 15, State 1, Line 2
Incorrect syntax near 'IFmastermaster'.
Msg 18054, Level 16, State 1, Procedure sys.sp_MSforeach_worker, Line 92 [Batch Start Line 0]
Error 55555, severity 16, state 1 was raised, but no message with that error number was found in sys.messages. If error is larger than 50000, make sure the user-defined message is added using sp_addmessage. 


2. I'm thinking about using a SQL Agent job on each server to send the email. However, if there is a better strategy, please do let me know, as I would prefer to send one email per week for the whole environment, with duplicate, unused, missing, and overlapping indexes included in one email body.
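
For the email part, the rough sketch I have in mind is below (it assumes Database Mail is already configured with a profile, here called 'DBA Mail'; the recipient and query are placeholders):

-- Sketch: email a query result as the message body via Database Mail.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'DBA Mail',                            -- hypothetical profile name
    @recipients   = N'dba-team@example.com',                -- placeholder
    @subject      = N'Weekly duplicate index report',
    @query        = N'SELECT @@SERVERNAME AS server_name;', -- replace with the real report query
    @attach_query_result_as_file = 0;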

Thanks in advance for any help


Removing IAM page (3:5940460) failed because someone else is using the object that this IAM page belongs to.


Hello Experts,

I'm executing the below command to move data from one data file to four data files on a test server:


USE [FinanceDB]
GO
DBCC SHRINKFILE (N'FinanceDB_DATA' , EMPTYFILE)
GO

I'm getting the below error:

Msg 1119, Level 16, State 1, Line 20
Removing IAM page (3:5940460) failed because someone else is using the object that this IAM page belongs to.
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

I executed DBCC CHECKDB on the FinanceDB database and it comes out clean: 0 consistency errors and 0 allocation errors.
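
One diagnostic sketch that might identify which object the IAM page belongs to (DBCC PAGE is undocumented, so treat this as a best-effort assumption; the file and page numbers come from the error message above):

-- Send DBCC PAGE output to the client instead of the error log.
DBCC TRACEON (3604);
-- Dump page 5940460 in file 3 of FinanceDB; the page header shows the
-- object and index it belongs to.
DBCC PAGE ('FinanceDB', 3, 5940460, 3);
DBCC TRACEOFF (3604);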

Any advice or help will be much appreciated.

Thanks
Jumbol

