Microsoft is keeping up its push to provide businesses with hybrid cloud tools by offering a new feature that lets companies stretch their database tables from on-premises infrastructure to its Azure storage service. 

Microsoft is launching its SQL Server Stretch Database service on Wednesday, along with the first release candidate of SQL Server 2016. The new feature allows database administrators to set up certain tables to stretch from their on-premises infrastructure to Microsoft’s cloud, while still allowing applications to access all the data across both environments. 

When a table is set up to use SQL Stretch Database, administrators can specify a length of time after which data is automatically moved from their on-premises SQL Server instance to Azure. Applications querying that database table will be able to see both the data stored on-premises and the data stored in Azure.
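Conceptually, the stretch policy is an age-based split over a table's rows. Here is a minimal sketch in Python (the function and row shape are hypothetical, for illustration only; the real feature is configured inside SQL Server against a date column on the stretched table):

```python
from datetime import datetime, timedelta

def split_by_age(rows, cutoff_days, now=None):
    """Partition rows into (local, cloud) by an age cutoff.

    Conceptual sketch of a stretch-style policy: rows whose
    last_modified timestamp is older than `cutoff_days` become
    candidates for migration to cloud storage.
    `rows` is an iterable of (row_id, last_modified) pairs.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=cutoff_days)
    local, cloud = [], []
    for row_id, last_modified in rows:
        (cloud if last_modified < cutoff else local).append(row_id)
    return local, cloud
```

Applications never see this split: queries against the stretched table transparently cover both partitions.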



Theory 1 – no central authority

There is no central authority for news and information. Choice on the Internet and in TV programming, and the gathering of narrow ‘birds-of-a-feather’ groups around Facebook pages and Twitter feeds, have created a cacophony that lacks a central ‘Athenian square’ where ideas are argued, facts are compared and opinions validated or discounted. Despite the great array of articulate criticism of Trump on conservative websites such as Ricochet and The Federalist, no Trump supporters are reading it.

That is, with apologies to W.B. Yeats, the centre cannot hold.

Not only is there no central authority in the world of political discourse, but the GOP has shown itself to be less than the sum of its parts, a weak collection of underwhelming operatives with little idea of how to proceed and absolutely no personal moral courage.

Theory 2 – the gamed moist robot theory

Scott Adams (of Dilbert fame) has studied deal-making, persuasion and even hypnosis, and has written some interesting blog posts about it. Back in September he identified Trump as “the best persuader I have ever seen. On a scale from 1 to 10, if Steve Jobs was a 10, Trump is a 15.”

His analysis of Trump’s Super Tuesday is insightful and his reading list on persuasion is also worth checking out.

We’re happy to announce a new Basic tier for Azure Search. We received a lot of feedback that Azure Search lacked an intermediate option between the Free and Standard tiers, and the new Basic tier addresses that gap.

Basic is great for cases where you need the production-class characteristics of Standard but have lower capacity requirements.

Basic costs $75/month (using US pricing as reference) and during Public Preview, we’re offering it at a 50% discount for $37.50/month per search unit. For more details and regional pricing please check out our pricing page.

Comparing Free, Basic and Standard

Here’s a table that summarizes the key aspects of each service tier. In summary, you can think of Basic as a smaller version of Standard. Free is different in that it doesn’t ensure resource isolation, it doesn’t offer an SLA option and you can only have one per subscription. For these reasons, Free is not appropriate for production workloads.

| Service tier | Free | Basic | Standard S1 | Standard S2*** |
| --- | --- | --- | --- | --- |
| Availability SLA | No | Yes* | Yes* | Yes* |
| Max documents | 10,000 | 1 million | 180 million (15 million/partition) | >180 million |
| Max partitions | N/A | 1 | 12 | 12 |
| Max replicas | N/A | 3 | 12 | 12 |
| Max storage | 50 MB | 2 GB | 300 GB (25 GB/partition) | >300 GB per service |
| Max units per subscription | 1 | 15** | 15** | 15** |

* Minimum two replicas for read-SLA, three replicas for read-write-SLA
** Can be increased by calling Azure support
*** S2 can be provisioned by calling Azure support

Performance: What to expect

Given the variation in index schemas, search queries and other options in Azure Search, there are no general-purpose performance numbers. That said, here are some sample numbers for Basic from a test workload we use often: a nine-field index with a mix of searchable, filterable and facetable fields, where each document is around 1 KB in size.
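As an illustration of what such a workload might look like, here is a hypothetical nine-field index definition in the Azure Search REST schema shape; the field names and types are assumptions for illustration, not the actual test schema:

```python
import json

# Hypothetical nine-field index definition (Azure Search REST schema shape).
# Field names/types are illustrative assumptions, not the actual test schema.
index_definition = {
    "name": "products",
    "fields": [
        {"name": "id", "type": "Edm.String", "key": True},
        {"name": "title", "type": "Edm.String", "searchable": True},
        {"name": "description", "type": "Edm.String", "searchable": True},
        {"name": "brand", "type": "Edm.String", "filterable": True, "facetable": True},
        {"name": "category", "type": "Edm.String", "filterable": True, "facetable": True},
        {"name": "color", "type": "Edm.String", "filterable": True, "facetable": True},
        {"name": "price", "type": "Edm.Double", "filterable": True, "facetable": True},
        {"name": "rating", "type": "Edm.Int32", "filterable": True, "facetable": True},
        {"name": "created", "type": "Edm.DateTimeOffset", "filterable": True},
    ],
}

# This JSON body would be PUT/POSTed to the service's indexes endpoint.
body = json.dumps(index_definition)
```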

We used a single Basic search unit and a good network connection. The results:

  • Bulk indexing: The service indexed ~15,000 documents per minute in 1,000-document batches. Queries will be slow if you push indexing this hard, but at this rate you can index one million documents in just over an hour.
  • Search/indexing mix: We loaded the index with slightly over half a million documents and ran searches and indexing at the same time. We issued one indexing request every couple of seconds with 10 changes each, simulating the steady trickle of updates an app makes in normal use rather than a batch update over the entire data set. Concurrently, we ran searches that involved three facets, a filter and retrieval of the top 10 matches, using keywords picked at random from a uniform distribution to ensure we hit cold parts of the index. Once warmed up, we achieved over five queries per second with ~200 ms latency for queries that match few documents (hundreds) and facet over fields with low to medium cardinality. If a query matches tens of thousands of documents, expect lower QPS and latencies climbing into the 300-400 ms range; queries matching a significant share of the entire data set will see significantly higher latency.
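The 1,000-document batching used in the bulk-indexing test can be sketched with a small helper (hypothetical, not part of any Azure SDK); each yielded batch would then be POSTed to the index's documents endpoint:

```python
def batches(docs, size=1000):
    """Yield successive fixed-size batches from a list of documents.

    Sketch of the 1,000-document batching described above; the last
    batch may be smaller than `size`.
    """
    for i in range(0, len(docs), size):
        yield docs[i:i + size]
```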

We found that you can get great performance from Basic search units as long as you operate them within parameters in line with their capacity.

There are a few things to watch for related to performance:

  • Avoid queries that match very large numbers of documents, and watch for heavy indexing running concurrently with search
  • Retrieve only the documents and fields you need (using $top and $select)
  • Reuse HTTP clients to avoid re-creating connections, which adds latency
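As a sketch of the $top/$select tip, here is how a trimmed query URL might be built (the service name, index name and api-version are assumptions; check them against current Azure Search documentation):

```python
from urllib.parse import urlencode

def build_search_query(service, index, term, top=10, select=None,
                       api_version="2015-02-28"):
    """Build an Azure Search query URL that limits payload size.

    $top caps the number of hits returned and $select trims each hit
    to the listed fields. Service/index names and the api-version are
    placeholders, not values from the article.
    """
    params = {"search": term, "$top": top, "api-version": api_version}
    if select:
        params["$select"] = ",".join(select)
    return ("https://{0}.search.windows.net/indexes/{1}/docs?{2}"
            .format(service, index, urlencode(params)))
```

Pair this with a single long-lived HTTP client or session per process, per the last tip, so each query reuses an existing connection instead of paying connection-setup latency.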

Try it out!

For more information on Azure Search Basic and pricing, please visit our pricing page, or head to the Azure portal to create your own Basic search service.

Today I am pleased to announce general availability of the on-premises StorSimple Virtual Array for all customers with an Enterprise Agreement for Microsoft Azure. We will be rolling out the virtual array to all regions in the coming weeks.

StorSimple Virtual Array is a version of the solution delivered as a virtual machine that runs on your existing hypervisors. The virtual array builds on previous StorSimple technology, using a hybrid cloud storage approach for on-demand capacity scaling in the cloud along with cloud-based data protection and disaster recovery.

The virtual array runs as a virtual machine on your Hyper-V or VMware ESXi hypervisors and can be configured as a file server (NAS) or as an iSCSI server (SAN). The hybrid approach stores the most frequently used (hottest) data locally on the virtual array and optionally tiers older, stale data to Azure. The virtual array can also back up data to Azure (offsite) and enables quick disaster recovery (DR).
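The hot/cold tiering idea can be illustrated with a small heat-based planner (a conceptual sketch only; the actual StorSimple tiering logic is internal to the appliance):

```python
def plan_tiering(items, local_capacity_bytes):
    """Decide which items stay on the local array and which tier to the cloud.

    Conceptual sketch of heat-based tiering: `items` are
    (name, size_bytes, last_access) tuples. The most recently accessed
    items stay local until local capacity is filled; everything else
    is tiered to cloud storage.
    """
    local, cloud, used = [], [], 0
    for name, size, _ in sorted(items, key=lambda i: i[2], reverse=True):
        if used + size <= local_capacity_bytes:
            local.append(name)
            used += size
        else:
            cloud.append(name)
    return local, cloud
```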

Each virtual array can manage up to 64 TB of data in the cloud. Virtual arrays, in different branch and remote offices across geographies, can be managed from a central StorSimple management portal in Azure.


Recommended workloads/scenarios (File Server or iSCSI Server):

  • User file shares
  • Department file shares
  • Small SQL databases
  • User home folders1

1 If you want to use features such as quota management and file screening to manage home folders, you can connect a Windows file server via iSCSI.

How to get started

  1. Navigate to the Azure portal.
  2. Click New.
  3. Select Data Services -> StorSimple Manager -> Quick Create.
  4. Provide a Name, then select Virtual Device Series under Managed Devices Type and Location for the StorSimple manager.


Minimum configuration

| CPU | 4 cores |
| RAM | 8 GB |
| Network | 1 virtual NIC |
| Virtual appliance OS disk | 80 GB |
| Virtual appliance data disk | 500 GB |

 

More detailed instructions for getting started, using the virtual appliance and managing it, are available on the StorSimple documentation page.

One of the features in the recently released Update 2 for StorSimple was the integration of StorSimple with Azure Site Recovery. Using Azure Site Recovery, virtual machine replication, and StorSimple cloud snapshot capabilities, you can protect entire workloads hosted on your StorSimple hybrid storage array. In the event of a disruption, you can use a single click to bring up your application online in Azure in just a few minutes.

The idea is illustrated below:

StorSimple and Azure Site Recovery

Here, your file server or any workload using the StorSimple hybrid storage array is being protected by Azure Site Recovery. StorSimple snapshots are protecting the data in the storage array. If a disaster occurs, or even for development or test scenarios, you can move your workload to the cloud.

Azure Site Recovery provides planned, unplanned and test failovers. StorSimple now works with each of these scenarios. During a planned or unplanned failover, StorSimple volume containers fail over to the StorSimple cloud appliance. Your domain-joined users will continue to access the data/application. During a test failover, StorSimple volumes are cloned on the cloud appliance. You can run disaster recovery drills or development/test workloads, then bring down the setup with a few clicks after you’re done.

Find more information about how to set up this scenario here.

Applications and data are at the heart of how organizations drive competitive value and improve efficiency. However, this digital transformation is resulting in an explosion of data. Enterprises have to figure out how to get a handle on this data – how to increase their storage capacity and keep their data safe and secure, without drastically increasing IT costs.
 
Microsoft believes a hybrid cloud approach can offer unique ways to manage this data proliferation. We believe you should be able to take advantage of the best of the public cloud and the best of your on-premises technology. Hybrid solutions should enable mission critical, recent, or latency-sensitive data to remain on-premises, while backups and archival data can seamlessly move to low cost and nearly limitless cloud storage. Applications and tools can access the data transparently, no matter where it is – so that it’s always available to you. And you can do it all without investing in new infrastructure, saving you time and money to focus on driving innovation.

Microsoft is investing in building hybrid capabilities across our product portfolio to help you take advantage of all that hybrid has to offer, simply and cost effectively. Today, we are extending that commitment with new offerings in SQL Server 2016 and StorSimple that make it even easier for you to leverage a hybrid cloud model to put you in control of how you store and protect your applications and data.

Leverage the infinite capacity of Azure with SQL Server Database updates

This week we are introducing the SQL Server 2016 Release Candidate with new hybrid enhancements available in preview. These capabilities make it easier than ever for you to choose whether you store your data on-premises or in the cloud. These new features integrate hybrid capabilities into the market-leading Microsoft data platform product you use today, empowering you to leverage the cloud to extend capacity for your massive data growth, while ensuring your data is protected.

SQL Server 2016 with SQL Server Stretch Database service, a new Azure companion service, enables you to dynamically stretch your on-premises warm and cold data to Azure for virtually endless compute capacity and storage. Now you can keep as much data as you need in the cloud, up to 60 terabytes per database in preview, without the high costs of traditional enterprise storage. The Stretch Database service makes remote query processing possible by providing compute and storage in a way that’s completely transparent to the application. SQL Server Stretch Database also works with Always Encrypted technology, which encrypts data before sending it to Azure; the encryption key remains on-premises, giving you added peace of mind that your data is protected no matter where it’s stored. SQL Server 2016 with the new Stretch Database service enables you to keep more data accessible for deep insights at significantly lower cost.

Another new hybrid capability available in SQL Server 2016 is support for Transactional Replication to Azure SQL Database which expands on the existing option for replicating data to SQL Server in an Azure virtual machine (VM). With this feature you can now replicate data directly to Azure SQL Database and benefit from a fully managed database. This extends the options you have to back up your data to the cloud to ensure it’s protected in worst-case scenarios. You can also migrate data from SQL Server on-premises to Azure SQL Database – providing a simple mechanism to move data to the cloud without downtime to an on-premises database.

Simplifying hybrid storage with Azure StorSimple Virtual Array

Azure StorSimple is another great example of how Microsoft has increased the hybrid capabilities of its products. Designed to help you increase storage capacity and data availability without investing in new infrastructure, StorSimple offers economical cloud storage or on-premises storage so you can choose where to store your data.

Today we are extending the StorSimple offering with StorSimple Virtual Array, a version of StorSimple offered in virtual machine form, now generally available. The VM form factor enables additional scenarios, particularly environments with minimal IT infrastructure and management, to take advantage of StorSimple. The virtual array is built on the success of existing StorSimple technology, which uses a hybrid cloud storage approach for on-demand capacity scaling in the cloud and cloud-based data protection and disaster recovery. The hybrid approach centers on your choice to store the most used data on the virtual array and optionally tier older data to Azure. The virtual array can be run as a virtual machine on your Hyper-V or VMware ESXi hypervisors and can be configured as a file server (NAS) or as an iSCSI server. It also provides the ability to back up your data to Azure.

Both SQL Server 2016 and StorSimple enhancements are available for you to try out today. We hope that you’ll test drive these exciting new offerings and let us know what you think.

NTT Communications (NTT Com), NTT Group’s information and communications technology (ICT) and international communications business, is enhancing its Enterprise Cloud offering in an effort to help customers achieve digital transformation.

NTT Com says enterprises are struggling with the complex requirements of digital transformation, the application of digital technology across multiple aspects of their business. First, they’re seeking to migrate traditional ICT (CRM, ERP, SCM, etc.) to the cloud to achieve more effective ICT operations and cost optimization. Second, tectonic shifts in business, driven by mobile, social, big data, the Internet of Things (IoT) and other digital technologies, are pushing them to deploy cloud-native applications and shift toward a DevOps culture.


Did you know an attacker can be present on a network for more than 200 days before being detected? Imagine the damage that can be done to an organization during that time: accessing sensitive data about your company, products, employees and clients; altering the operating system on every computer in your network; causing irreparable damage to your company, both in dollars and in reputation, before you even know they’re there.

Hackers typically gain control of a network using a privileged account (e.g., domain admin) within 24-48 hours of initiating the attack. They move silently through the network, avoiding actions that would alert IT to their presence. If they are discovered, it’s usually by chance or an external notification.

Do you know how a security breach actually happens? How hackers get a foothold, and what they do once they’re in? Most importantly: Are you as secure as you think you are? Join me for a free webinar to learn how a breach actually happens and the steps you can take to help prevent an attack, including:

  • Common ways hackers get into your network, including phishing scams and targeted search results
  • How hackers set up and manage long-term attacks
  • Things you can do today to help prevent an attack
  • The key response phases, including incident response, tactical recovery and strategic recovery
  • Tips for developing an effective communications plan that won’t compromise your data’s security

I hope you will join me and my colleague Jim Moeller, from the Enterprise Cybersecurity Group, to learn more about this critical topic. The “Anatomy of a Breach: How Hackers Break In” webinar will be held March 8, 2016. Registration is now open.

The Only Influence Marketing Workbook You'll Ever Need

Influence marketing keeps evolving at a rapid pace. That’s why I created this workbook based on my own influence marketing planning process.

Use this workbook to identify the right influencers for your next project. You can also use it as a template for any presentations you need to do for teams or clients. The workbook comes in PowerPoint format and is fully customizable, so that when you fill in the information about your influence marketing strategy, you get all the credit!

This workbook will help you:

  • Identify your influencer personas
  • Build an ongoing influence marketing strategy
  • Choose which metrics to track and measure to calculate success
  • Plan an entire year of influencer-powered marketing

Download the workbook for free, and go get your planning started!

