How SQL Server can protect your digital currency
Mon, 30 Mar 2020 18:46:49 -0000
Do you wonder whether your data is under attack? When should we worry about whether our data is safe and secure? What precautionary steps have we taken to protect it? Can we eliminate data breaches? In this article I introduce some of the great security features built into SQL Server 2017. No product can prevent all risk of data loss or unauthorized access. The best defense is a combination of good products, knowledgeable people, and rigorous processes designed with data protection in mind at all levels of the organization.
Let us start by understanding what data is.
Before the advent of computers, there were relatively few ways to create and share data, primarily paper and film. Today there are many ways to create, store, and access digital data (0s and 1s). Data may be a collection of raw facts; it may be numbers or words; it may be recorded information about something or someone; and in typical digital language, data is binary. Digital data, such as email, digital images, digital movies, and e-books, is much easier to create, share, transfer, and store, but it is also much more difficult to secure.
The digital data ecosystem
Most data can be classified as structured or unstructured, and most of the data being created today is unstructured. With the advancement of computer and communication technologies, the rate of data generation and sharing has increased exponentially.
In simple terms, structured data is typically stored in rows and columns using a database management system (DBMS) and is easily searchable by basic algorithms. Unstructured data is pretty much everything else and does not have a predefined data model; its unstructured nature makes it much more difficult to retrieve and process. Industry is rapidly evolving numerous sources and techniques (data mining, natural language processing (NLP), and text analysis) to analyze, derive, manage, and store both unstructured and structured data.
In 1998, Merrill Lynch cited a rule of thumb that somewhere around 80 to 90 percent of all potentially usable business information may originate in unstructured form. IDC and EMC projected that data will grow to 40 zettabytes by 2020.
The chart below shows the amount of data generated every minute in social media, according to Domo's Data Never Sleeps 5.0 report.
It is not necessary to store all data. IDC predicts that by 2025 nearly 20% of the data in the global datasphere will be critical to our daily lives. Organizations should have a plan in advance for storing the right amount of data and for extracting the business value, the value for human experience, and the personal value from it. That is the choice, and it is a definitive challenge.
The following chart provides a view of the total number of records containing personal and other sensitive data that were compromised between January 2017 and March 2018.
Gartner forecast that total worldwide spending on cybersecurity by organizations would reach $96 billion in 2018, up 8 percent from 2017.
What could possibly go wrong?
A data breach occurs when confidential information is exposed or compromised by intentional or unintentional means. Common attack vectors include:
Malware: Any type of malicious software, including viruses, worms, ransomware, spyware, and Trojans, that gains access to damage a computer without the knowledge of the owner. Malware is usually injected and installed on a machine by tricking the user into installing or accessing a program from the internet.
Password attack: Brute force attacks can be very successful for gaining access to systems with insecure passwords. 81% of confirmed data breaches involved weak, default or stolen passwords.
Phishing: Capitalizing on our apparent human need to click things, phishing campaigns try to get the recipient to open an infected attachment or click an equally infectious link.
Social Engineering: Email or phone contact with an authorized user of sensitive data, aimed at obtaining personal information that can be used in an attack to gain unauthorized access.
An ounce of prevention is worth a pound of cure.
SQL Server 2017 is equipped with many features to help secure and protect your data from breaches. With SQL Server, security is so well integrated that much of it you simply turn on. For example, it is extremely easy to encrypt data on disk, on the wire, and in memory.
- Always Encrypted (secure at rest and in motion): Large amounts of data lead to added complexity. Data is queried, transmitted, backed up, and replicated nearly constantly, and with all that activity, any link in the chain is a potential vulnerability. Always Encrypted enables encryption of sensitive data inside the application and on the wire, while never revealing the encryption keys to the database engine. As a result, Always Encrypted provides a separation between those who own the data and those who manage it.
- SQL Server Dynamic Data Masking prevents the abuse of sensitive data by controlling which users can access the unmasked data.
- SQL Server Authentication ensures that only authorized users have access by requiring valid credentials to access the data in databases.
- SQL Server Audit is the primary auditing tool in SQL Server 2017, enabling you to track and log server-level events as well as individual database events. It uses Extended Events to help create and run audit-related events. The SQL Server audit components are the server audit, the server audit specification, and the database audit specification.
- Row-Level Security helps database administrators restrict a specific engineer's or user's access to the rows in a database table. This makes the security system more reliable and robust by reducing the system's surface area.
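As a sketch of how Row-Level Security is typically configured (the table, schema, and predicate names here are hypothetical, not from the original article), a filter predicate function is bound to a table through a security policy:

```sql
-- Hypothetical example: each sales rep sees only their own orders.
-- Assumes a dbo.Orders table with a SalesRep column.
CREATE SCHEMA Security;
GO
CREATE FUNCTION Security.fn_OrderAccess(@SalesRep sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS AccessGranted
       WHERE @SalesRep = USER_NAME();  -- row is visible only to its owner
GO
-- Bind the predicate to the table; filtering is transparent to queries.
CREATE SECURITY POLICY Security.OrderFilter
    ADD FILTER PREDICATE Security.fn_OrderAccess(SalesRep) ON dbo.Orders
    WITH (STATE = ON);
```

Once the policy is on, ordinary `SELECT` statements against `dbo.Orders` silently return only the qualifying rows; no application changes are required.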
SQL Server provides enterprise-grade security capabilities on Windows and on Linux, all built in:
- Transparent Data Encryption
- Transport Layer Security (SSL/TLS)
- Control Access (Database Access/Application Access)
- SQL Server Authentication
- Active Directory Authentication
- Row-Level Security
- Dynamic Data Masking
- Tracking Activities (Fine-grained Audit)
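As a sketch of Dynamic Data Masking from the list above, masking functions are declared per column at table-definition time (the table, column, and role names here are hypothetical):

```sql
-- Hypothetical table using built-in masking functions.
CREATE TABLE dbo.Customers (
    CustomerID  int IDENTITY(1,1) PRIMARY KEY,
    Email       varchar(100) MASKED WITH (FUNCTION = 'email()'),
    Phone       varchar(20)  MASKED WITH (FUNCTION = 'partial(0,"XXX-XXX-",4)'),
    CreditCard  varchar(19)  MASKED WITH (FUNCTION = 'default()')
);

-- Non-privileged users see masked values such as aXXX@XXXX.com;
-- grant UNMASK only to trusted principals.
GRANT UNMASK TO AuditorRole;
```

Masking happens at query time, so the stored data itself is unchanged; it complements, rather than replaces, encryption.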
Summary: Digitization has led to an explosion of new data that is not expected to abate anytime soon. As data continues to play a vital role in our future, cyber criminals are causing organizations to spend ever-increasing amounts of money every year to protect data. It is important that organizations get the most value from these investments in data protection.
DATA IS DIGITAL CURRENCY
Related Blog Posts
Database security methodologies of SQL Server
Mon, 01 Jun 2020 23:35:25 -0000
In general, security touches every aspect and activity of an information system. The subject of security is vast, and we need to understand that security can never be perfect. Every organization has a unique way of dealing with security, based on its requirements. In this blog, I describe database security models and briefly review SQL Server security principles.
A few definitions:
- Database: A collection of information stored in a computer
- Security: Freedom from danger
- Database security: The mechanism that protects the database against intentional or accidental threats or that protects it against malicious attempts to steal (view) or modify data
Database security models
Today’s organizations rely on database systems as the key data management technology for a large variety of tasks ranging from regular business operations to critical decision making. The information in the databases is used, shared, and accessed by various users. It needs to be protected and managed because any changes to the database can affect it or other databases.
The main role of a security system is to preserve integrity of an operational system by enforcing a security policy that is defined by a security model. These security models are the basic theoretical tools to start with when developing a security system.
Database security models include the following elements:
- Subject: Individual who performs some activity on the database
- Object: Database unit that requires authorization in order to be manipulated
- Access mode/action: Any activity that might be performed on an object by a subject
- Authorization: Specification of access modes for each subject on each object
- Administrative rights: Who has rights in system administration and what responsibilities administrators have
- Policies: Enterprise-wide accepted security rules
- Constraint: A more specific rule regarding an aspect of an object and action
Database security approaches
A typical DBMS supports basic approaches of data security—discretionary control, mandatory control, and role-based access control.
Discretionary control: A given user typically has different access rights, also known as privileges, for different objects. For discretionary access control, we need a language to support the definition of rights—for example, SQL.
Mandatory control: Each data object is labeled with a certain classification level, and a given object can be accessed only by a user with a sufficient clearance level. Mandatory access control is applicable to the databases in which data has a rather static or rigid classification structure—for example, military or government environments.
In both discretionary and mandatory control cases, the unit of data and the data object to be protected can range from the entire database to a single, specific tuple.
Role-based access control (RBAC): Permissions are associated with roles, and users are made members of appropriate roles. However, a role brings together a set of users on one side and a set of permissions on the other, whereas user groups are typically defined as a set of users only.
Role-based security provides the flexibility to define permissions at a high level of granularity in Microsoft SQL, thus greatly reducing the attack surface area of the database system.
RBAC mechanisms are a flexible alternative to mandatory access control (MAC) and discretionary access control (DAC).
- Objects: Any system, resource file, printer, terminal, database record, etc.
- Operations: An executable image of a program, which upon invocation performs some function for the user.
- Permissions: An approval to perform an operation on one or more RBAC-protected objects
- Role: A job function within the context of an organization with some associated semantics regarding the authority and responsibility conferred on the user assigned to the role.
For more information, see Database Security Models — A Case Study.
Note: Access control mechanisms regulate who can access which data. The need for such mechanisms can be concluded from the variety of actors that work with a database system—for example, DBA, application admin and programmer, and users. Based on actor characteristics, access control mechanisms can be divided into three categories – DAC, RBAC, and MAC.
Principles of SQL Server security
A SQL Server instance contains a hierarchical collection of entities, starting with the server. Each server contains multiple databases, and each database contains a collection of securable objects. Every SQL Server securable has associated permissions that can be granted to a principal, which is an individual, group, or process granted access to SQL Server.
For each security principal, you can grant rights that allow that principal to access or modify a set of the securables, which are the objects that make up the database and server environment. They can include anything from functions to database users to endpoints. SQL Server scopes the objects hierarchically at the server, database, and schema levels:
- Server-level securables include databases as well as objects such as logins, server roles, and availability groups.
- Database-level securables include schemas as well as objects such as database users, database roles, and full-text catalogs.
- Schema-level securables include objects such as tables, views, functions, and stored procedures.
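The hierarchy above shows up directly in permission statements. A minimal sketch of granting rights at each scope (the principal and object names are hypothetical):

```sql
-- Server scope: allow a login to view server state (DMVs, sessions).
GRANT VIEW SERVER STATE TO [DOMAIN\OpsTeam];

-- Database scope: allow a database user to create tables.
GRANT CREATE TABLE TO ReportingUser;

-- Schema scope: allow read access to every object in one schema.
GRANT SELECT ON SCHEMA::Sales TO ReportingUser;
```

Because permissions inherit down the hierarchy, a grant at schema scope covers every table, view, and procedure the schema contains.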
SQL Server authentication approaches include:
- Authentication: Authentication is the SQL Server login process by which a principal requests access by submitting credentials that the server evaluates. Authentication establishes the identity of the user or process being authenticated. SQL Server authentication helps ensure that only authorized users with valid credentials can access the database server. SQL Server supports two authentication modes, Windows authentication mode and mixed mode.
- Windows authentication is often referred to as integrated security because this SQL Server security model is tightly integrated with Windows.
- Mixed mode supports authentication both by Windows and by SQL Server, using usernames and passwords.
- Authorization: Authorization is the process of determining which securable resources a principal can access and which operations are allowed for those resources. Microsoft SQL-based technologies support this principle by providing mechanisms to define granular object-level permissions and simplify the process by implementing role-based security. Granting permissions to roles rather than to users simplifies security administration.
- It is a best practice to use server-level roles for managing server-level access and security, and database roles for managing database-level access.
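Granting permissions to roles rather than to individual users can be sketched as follows (the role and user names are hypothetical):

```sql
-- Create a role and grant it permissions once.
CREATE ROLE SalesReader;
GRANT SELECT ON SCHEMA::Sales TO SalesReader;

-- Add users to the role instead of granting to each user directly.
ALTER ROLE SalesReader ADD MEMBER Alice;
ALTER ROLE SalesReader ADD MEMBER Bob;

-- Revoking access later is a single membership change per user.
ALTER ROLE SalesReader DROP MEMBER Bob;
```

When requirements change, you adjust the role's grants in one place rather than auditing every individual user's permissions.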
Here are a few additional recommended best practices for SQL Server authentication:
- Use Windows authentication.
- Enables centralized management of SQL Server principals via Active Directory
- Uses Kerberos security protocol to authenticate users
- Supports integrated password policy enforcement including complexity validation for strong passwords, password expiration, and account lockout
- Use separate accounts to authenticate users and applications.
- Enables limiting the permissions granted to users and applications
- Reduces the risks of malicious activity such as SQL injection attacks
- Use contained database users.
- Isolates the user or application account to a single database
- Improves performance, as contained database users authenticate directly to the database without an extra network hop to the master database
- Supports both SQL Server and Azure SQL Database, as well as Azure SQL Data Warehouse
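Creating a contained database user can be sketched as follows; this assumes contained database authentication is enabled on the instance, and the database and user names are hypothetical:

```sql
-- Enable contained database authentication at the instance level.
EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;

-- Make the database partially contained.
ALTER DATABASE SalesDb SET CONTAINMENT = PARTIAL;

-- Create a user whose password lives in the database itself:
-- no server-level login, no authentication hop to master.
USE SalesDb;
CREATE USER AppUser WITH PASSWORD = 'Use-A-Str0ng-Passw0rd!';
```

Because the credential travels with the database, the database can be moved or failed over to another instance without recreating server logins.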
Database security is an important goal of any data management system. Each organization should have a data security policy, which is a set of high-level guidelines determined by:
- User requirements
- Environmental aspects
- Internal regulations
- Government laws
Database security is based on three important constructs—confidentiality, integrity, and availability. The goal of database security is to protect your critical and confidential data from unauthorized access.
Manage and analyze humongous amounts of data with SQL Server 2019 Big Data Cluster
Thu, 07 May 2020 18:50:24 -0000
A collection of facts and statistics for reference or analysis is called data, and the term "big data" refers, in a way, to a large sum of data. The big data concept has been around for many years, and the volume of data is growing like never before, which is why data is a hugely valued asset in this connected world. Effective big data management enables an organization to locate valuable information with ease, regardless of how large or unstructured the data is. The data is collected from various sources, including system logs, social media sites, and call detail records.
The four V's associated with big data are Volume, Variety, Velocity, and Veracity:
- Volume is about the size—how much data you have.
- Variety means that the data is very different: you have very different types of data structures.
- Velocity is about speed: how fast the data is getting to you.
- Veracity, the final V, is a difficult one: big data can be very unreliable.
SQL Server Big Data Clusters make it easy to manage this complex assortment of data.
You can use SQL Server 2019 to create a secure, hybrid, machine learning architecture starting with preparing data, training a machine learning model, operationalizing your model, and using it for scoring. SQL Server Big Data Clusters make it easy to unite high-value relational data with high-volume big data.
Big Data Clusters bring together multiple instances of SQL Server with Spark and HDFS, making it much easier to unite relational and big data and use them in reports, predictive models, applications, and AI.
In addition, using PolyBase, you can connect to many different external data sources such as MongoDB, Oracle, Teradata, SAP HANA, and more. Hence, SQL Server 2019 Big Data Cluster is a scalable, performant, and maintainable SQL platform, data warehouse, data lake, and data science platform that doesn’t require compromising between cloud and on-premises. Components include:
The controller provides management and security for the cluster. It contains the control service, the configuration store, and other cluster-level services such as Kibana, Grafana, and Elastic Search.
The compute pool provides computational resources to the cluster. It contains nodes running SQL Server on Linux pods. The pods in the compute pool are divided into SQL compute instances for specific processing tasks.
The data pool is used for data persistence and caching. The data pool consists of one or more pods running SQL Server on Linux. It is used to ingest data from SQL queries or Spark jobs. SQL Server Big Data Cluster data marts are persisted in the data pool.
The storage pool consists of storage pool pods comprising SQL Server on Linux, Spark, and HDFS. All the storage nodes in a SQL Server Big Data Cluster are members of an HDFS cluster.
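The PolyBase connectivity mentioned earlier can be sketched as follows; the host address, credential values, and table definition here are hypothetical, not from the original article:

```sql
-- Credential for the remote system (hypothetical values).
CREATE DATABASE SCOPED CREDENTIAL OracleCred
    WITH IDENTITY = 'oracle_user', SECRET = 'oracle_password';

-- External data source pointing at an Oracle instance.
CREATE EXTERNAL DATA SOURCE OracleSales
    WITH (LOCATION = 'oracle://oracle-host:1521', CREDENTIAL = OracleCred);

-- External table: T-SQL queries run through PolyBase
-- against the remote data without copying it into SQL Server.
CREATE EXTERNAL TABLE dbo.RemoteOrders (
    OrderID int,
    Amount  decimal(10, 2)
)
WITH (LOCATION = '[XE].[SALES].[ORDERS]', DATA_SOURCE = OracleSales);
```

Once defined, `dbo.RemoteOrders` can be joined with local tables and HDFS data in a single query, which is the data virtualization scenario Big Data Clusters are built around.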
The following is the reference architecture of a SQL Server 2019 Big Data Cluster:
Big data analysis
Data analytics is the science of examining raw data to uncover underlying information. The primary goal is to ensure that the resulting information is of high data quality and accessible for business intelligence as well as big data analytics applications. Big Data Clusters make machine learning easier and more accurate by handling the four Vs of big data:
| The impact of the Vs on analytics | How a Big Data Cluster helps |
| --- | --- |
| The greater the volume of data processed by a machine learning algorithm, the more accurate the predictions will be. | Increases the data volume available for AI by capturing data in scalable, inexpensive big data storage in HDFS and by integrating data from multiple sources using PolyBase connectors. |
| The greater the variety of different sources of data, the more accurate the predictions will be. | Increases the variety of data available for AI by integrating multiple data sources through the PolyBase connectors. |
| Real-time predictions depend on up-to-date data flowing quickly through the data processing pipelines. | Increases the velocity of data to enable AI by using elastic compute and caching to speed up queries. |
| Accurate machine learning depends on the quality of the data going into model training. | Increases the veracity of data available for AI by sharing data without copying or moving it, which would introduce data latency and data quality issues. SQL Server and Spark can both read and write the same data files in HDFS. |
Azure Data Studio is the tool that data engineers, data scientists, and DBAs use to manage databases and write queries. Cluster admins use the admin portal, which runs as a pod inside the same namespace as a whole cluster and provides information such as status of all pods and overall storage capacity.
Azure Data Studio is a cross-platform management tool for Microsoft databases. It’s like SQL Server Management Studio on top of the popular VS Code editor engine, a rich T-SQL editor with IntelliSense and plug-in support. Currently, it’s the easiest way to connect to the different SQL Server 2019 endpoints (SQL, HDFS, and Spark). To do so, you need to install Data Studio and the SQL Server 2019 extension.
If you have a Kubernetes infrastructure, you can deploy a single-node cluster with a single command and have a cluster running in about 30 minutes.
If you want to install SQL Server 2019 Big Data Cluster on your on-premises Kubernetes cluster, you can find an official deployment guide for Big Data Clusters on Minikube in Microsoft docs.
Planning is everything and good planning will get a lot of problems out of the way, especially if you are thinking about streaming data and real-time analytics.
When it comes to technology, organizations have many different types of big data management solutions to choose from. Dell Technologies solutions for SQL Server help organizations achieve some of the key benefits of SQL Server 2019 Big Data Clusters:
- Insights to everyone: Access to management services, an admin portal, and integrated security in Azure Data Studio, which makes it easy to manage and create a unified development and administration experience for big data and SQL Server users
- Enriched data: Data using advanced analytics and artificial intelligence that’s built into the platform
- Overall data intelligence:
- Unified access to all data with unparalleled performance
- Easily and securely manage data (big/small)
- Build intelligent apps and AI with all data
- Management of any data, any size, anywhere: Simplified management and analysis through unified deployment, governance, and tooling
- Easy deployment and management using the Kubernetes-based big data solution built into SQL Server
To make better decisions and to gain insights from data, large, medium-size, and small enterprises use big data analysis. For information about how the SQL solutions team at Dell helps customers store, analyze, and protect data with Microsoft SQL Server 2019 Big Data Cluster technologies, see the following links: