Dell Cloud Blog

  • Your Microsoft Azure Stack sandbox

    Over the last few weeks since the Azure Stack Airlift for EAI customers, we have had the opportunity to listen to Service Providers from around the world and discuss our platform for Azure Stack, including our solutions for data protection and security. While Azure Stack cannot arrive soon enough, there was universal recognition that we need not wait: with the single-node Dell EMC developer edition, you have the opportunity to get started today.

    With a continuous stream of updates to both the experience and the Azure Services delivered since TP2, there is every reason to start now.
     
    The single node enables you to set up evaluation units:
    1. As an MSP/SP, you can start to develop your Azure Stack business model (packaging and pricing). Build and evaluate plans and offers. At Dell EMC, we offer a flexible consumption model for the infrastructure to enhance the Microsoft Azure Stack business model.
    2. Evaluate PaaS with Azure App Service, Azure Functions and, eventually, Blockchain and more
    3. Extend Azure on premises to augment your Azure strategy
    a. Look at edge and disconnected solutions to address a growing customer need
    b. Modernize applications and start to develop on premises
    4. Integrate into your infrastructure with Azure AD and ADFS scenarios
     
    This means developers and evaluators can immediately access their own Azure Stack portal, use pre-created plans and offers, start to develop use cases around modernizing applications, and explore both the consistency with Azure and how to architect and develop hybrid apps.
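    Here is what that first session might look like from PowerShell. This is a minimal sketch: the environment name and endpoint URL are illustrative assumptions for a default single-node deployment, and the -ArmEndpoint shortcut assumes a recent AzureRM module (earlier previews used the AzureStack-Tools Connect module instead, which we cover elsewhere in this blog).

        # Register the Azure Stack tenant ARM endpoint as a new environment.
        # The URL below is the default for a single-node POC deployment;
        # substitute the endpoint of your own environment.
        Add-AzureRmEnvironment -Name "AzureStackTenant" `
            -ArmEndpoint "https://management.local.azurestack.external"

        # Log in against that environment with your tenant credentials.
        Login-AzureRmAccount -EnvironmentName "AzureStackTenant"

        # Browse what the subscription can see, such as available locations
        # and any resource groups created from the pre-created plans and offers.
        Get-AzureRmLocation
        Get-AzureRmResourceGroup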

    And yes, you can do all this on a single node for less than $10,000, and you are not paying for any of the Azure Services consumed on this box.

    Our single-node edition for developers and evaluators is the PowerEdge R630-based configuration described under Next Steps below.

    Additionally, talk to our Customer Solution Centers to schedule a demo. Reach us at mscloud_sales@dell.com.
     
     
    What is coming next? We will continue to work with our early adopters and early customers to ensure you are ready on day one to adopt and derive value from your Azure Stack implementation. The developer edition is a key step in that direction; it will receive a GA code update and remain available beyond GA. We will also continue to engage you through our Dell Customer Solution Centers and our partnership with Microsoft Technology Centers, delivering PoCs, webinars and workshops to keep you educated and up to date on Azure Stack. For any questions, please email mscloud_sales@dell.com.
     

    Next Steps:
    Get the Dell EMC developer node (PE R630): mscloud_sales@dell.com
    Get TP3 code: https://azure.microsoft.com/en-us/overview/azure-stack/try/
    Azure Stack documentation: https://docs.microsoft.com/en-us/azure/azure-stack/


     

    Other useful links
    TP3 Blog: https://azure.microsoft.com/en-us/blog/hybrid-application-innovation-with-azure-and-azure-stack/
    Business model: https://channel9.msdn.com/Blogs/azurestack/Azure-Stack-extends-cloud-economic-model-on-premises-with-pay-as-you-use-pricing

  • ARM Templates and Source Control in Azure Stack - Part 1 of 2

    Since Microsoft announced Azure Stack, excitement and interest have continued to grow among customers seeking to implement or augment a cloud computing operating model and business strategy.

    To meet the demand for consulting expertise, the Dell Technologies Customer Solution Centers have hosted design sessions and proofs of concept that helped facilitate and accelerate the testing of Technical Preview 2. One of the first questions we always receive is “Where do we start so we can make informed decisions in the future?”

    Based on our in-depth experience with Microsoft Azure Stack and our rich heritage with Fast Track Reference Architectures, CPS-Premium, and CPS-Standard based on the Windows Azure Pack and System Center, we believe that Azure Resource Manager (ARM) is the perfect place to start. The ARM model is essential to service deployment and management for both Azure and Azure Stack.  Using the ARM model, resources are organized into resource groups and deployed with JSON templates.

    Since ARM templates are declarative files written in JSON, they are best created and shared within the context of a well-defined source control process.  We solution architects at the Solution Centers decided to define what this ARM template creation and source control process might look like within our globally distributed ecosystem of labs and data centers. We have shared some of our learning journey in two blog posts to help facilitate knowledge transfer and preparation.

    Hopefully, these blogs will prove to be thought provoking, especially for IT professionals with limited exposure to infrastructure as code or source control processes.  Though we have depicted a manual source control process, we hope to evolve to a fully automated model using DevOps principles in the near future.  

    Filling the Toolbox

    Our first step was to choose the right tools for the job.  We agreed upon the following elements for creating and sharing our ARM templates:

    • Microsoft Visual Studio 2015 Community or Enterprise Editions - The flagship Microsoft IDE with fantastic features like a JSON Editor, seamless Azure SDK integration, and an extension for GitHub.  VS 2015 Enterprise Edition can be purchased or downloaded using a Visual Studio Subscription.  Community Edition can be downloaded for free.

    https://www.visualstudio.com/vs/community/

    • GitHub account for creating and accessing public repositories - There are many great options for hosting development projects online, including Microsoft Visual Studio Team Foundation Server, but GitHub seemed like an excellent choice for getting started with source control. GitHub Guides is the perfect introduction.

    https://guides.github.com/

    • Azure subscription credentials - Needed to deploy our ARM templates to Microsoft Azure. 

    https://azure.microsoft.com/en-us/free/

    • Azure Stack TP2 1-Node POC environment - Needed to deploy our ARM template to an Azure-inspired instance in our labs and data centers.

    https://azure.microsoft.com/en-us/overview/azure-stack/try/

    • Azure Stack Tools downloaded from GitHub - A set of PowerShell modules and scripts that proved to be invaluable in working with Azure Stack TP2.

    https://github.com/Azure/AzureStack-Tools
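    As a rough sketch of how we typically pull down and load these tools (the module path reflects the repository layout at the time of writing; check the repository README for the current module names):

        # Download and extract the AzureStack-Tools repository (master branch).
        Invoke-WebRequest -Uri "https://github.com/Azure/AzureStack-Tools/archive/master.zip" `
            -OutFile "AzureStack-Tools.zip"
        Expand-Archive -Path "AzureStack-Tools.zip" -DestinationPath "." -Force

        # Load the Connect module, which helps register Azure Stack endpoints
        # for use with the AzureRM cmdlets.
        Set-Location ".\AzureStack-Tools-master"
        Import-Module ".\Connect\AzureStack.Connect.psm1"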

    We used the Dell EMC recommended server for an Azure Stack TP2 1-Node POC environment, the PowerEdge R630. With 2U performance packed into a compact 1U chassis, the PowerEdge R630 two-socket rack server delivers uncompromising density and productivity. The recommended BOM can be found here:

    http://en.community.dell.com/techcenter/cloud/b/dell-cloud-blog/archive/2016/09/30/get-started-today-with-tp2-and-get-ready-for-azure-stack

    Process Overview

    After a few lively discussions about the high level process for creating and sharing our ARM templates, we felt that the following guideposts were a good place to start: 

    • First and foremost, carefully consider what problem we are trying to solve with the environment that will be deployed with the ARM template. This is all about gathering requirements. What type of performance will our application require? Do we need to keep our application in close proximity to our data? Do we need multiple types of environments like Test/Dev and Production?
    • Craft a Visio diagram that depicts the application architecture.  We did a little digging and found a lot of great guidance on this subject.  Obviously, finding a link to the Visio Stencils themselves was key, but we also found a great site that provided examples of what the diagrams might look like.  We have provided a couple of these sites here:

    https://blogs.technet.microsoft.com/keithmayer/2014/10/06/tools-for-building-professional-cloud-architecture-diagrams/

    https://azure.microsoft.com/en-us/solutions/architecture/#drawing-symbol-and-icon-sets

    • Consult the AzureStack-QuickStart-Templates GitHub repository for an appropriate solution.

    https://github.com/Azure/AzureStack-QuickStart-Templates

    • Revise and test the ARM template to meet the requirements.  
    • Create the appropriate documentation for the template.

    • Inform other solution architects of the new template and invite them to deploy as-is or to make improvements using the recommended procedures outlined in these blogs.

    Throughout the blog posts, original creators of templates will be referred to as the authors, and the contributors making enhancements to the templates will be referred to as collaborators.

    Selecting Our First QuickStart ARM Template

    One of the most powerful aspects of Microsoft Azure Stack is that it runs the same APIs as Microsoft Azure but on a customer's premises. Because of this, service providers are able to use the same ARM templates to deploy a service to both Azure Stack and Azure without modifications to the template. Only templates contained in the AzureStack-QuickStart-Templates GitHub repository have been created to deploy successfully to both Azure Stack TP2 and Azure. At this time, the templates in the Azure-QuickStart-Templates GitHub repository won't necessarily work with Azure Stack TP2 because not all Resource Providers (RPs) from Azure are currently available on Azure Stack.

    For our first ARM template deployment in the Solution Center, we decided to keep it simple and select the non-HA SharePoint template from AzureStack-QuickStart-Templates. The diagram that follows depicts the end state of our test SharePoint farm. SharePoint is a great example of an application that can be hosted by service providers and made available under a SaaS business model using Azure Stack. This gives customers a way to immediately consume all the collaboration features of SharePoint from their trusted service provider without investing a great deal of time and effort to deploy it themselves. Other applications that service providers have hosted for customers include Microsoft Exchange and Microsoft Dynamics.

     

    The specific template we selected was sharepoint-2013-non-ha.

    Creating an Azure Resource Group Using Visual Studio

    Once we selected which ARM template from GitHub we wanted to use for our first deployment, we needed to create a new Azure Resource Group project in Visual Studio on the author's laptop as depicted in the following screen print.

    When creating the Resource Group, we checked the “Create directory for solution” and “Create new Git repository” checkboxes. By creating a Git repository, authors are able to use version control while beginning work with the new ARM template on their local machine. There are many great references online for getting started with Git in Visual Studio. A few of us thought the following article was a good starting point:

    https://msdn.microsoft.com/en-us/magazine/mt767697.aspx

    We named the repository and associated directory chihostsp (short for "Chicago Hosted SharePoint," since the author was in the Chicago Solution Center), which is the name that was also used for the Resource Group when it was deployed. Then, we selected Azure Stack QuickStart from the "Show templates from this location" drop-down. This exposed all the individual ARM templates in the AzureStack-QuickStart-Templates repository on GitHub. We selected sharepoint-2013-non-ha.

    The content of the ARM template then appeared in Visual Studio.

    We learned that it is important to refrain from modifying the structure of this template within the downloaded folder when just getting started – especially the Deploy-AzureResourceGroup PowerShell Script. This script deploys the newly designed Resource Group regardless of whether an Azure or Azure Stack Subscription is used. To ensure the success of a Resource Group deployment using this PowerShell script in Visual Studio, we added the necessary accounts and subscriptions into Visual Studio Cloud Explorer.  When properly configured, Cloud Explorer should look something like the following (we've blacked out part of our domain suffix in some of the screen prints going forward):

    We found it desirable to test this new template from the author's machine before making any changes. To do this, we deployed the chihostsp Resource Group through Visual Studio to an Azure Stack TP2 Tenant Subscription. In the case of our lab environment, the name of the Azure Stack TP2 Tenant Subscription was FirstSubscription. We supplied the appropriate parameters when prompted. We found that many of the deployment failures we encountered were attributable to parameter values that violated resource naming rules, so we made sure to understand the legal resource naming conventions to ensure a successful deployment.
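    For readers who prefer the command line, the following is roughly what the Deploy-AzureResourceGroup PowerShell script automates. This is a simplified sketch using the AzureRM cmdlets; it assumes you are already logged in to the target Azure or Azure Stack subscription, and the file names follow the standard QuickStart project layout.

        # Create the target Resource Group. "local" is the default region name
        # of an Azure Stack TP2 POC; use an Azure region (e.g. "East US") when
        # deploying to public Azure.
        New-AzureRmResourceGroup -Name "chihostsp" -Location "local"

        # Deploy the template with its parameter file. The deployment fails
        # fast if a parameter value violates resource naming rules.
        New-AzureRmResourceGroupDeployment -Name "chihostsp-deploy" `
            -ResourceGroupName "chihostsp" `
            -TemplateFile ".\azuredeploy.json" `
            -TemplateParameterFile ".\azuredeploy.parameters.json"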

    The sharepoint-2013-non-ha template included logic for working with the Azure Stack Key Vault.  For an introduction to the Azure Stack Key Vault, please see the following in the Azure Stack documentation:

    https://docs.microsoft.com/en-us/azure/azure-stack/azure-stack-kv-intro

    When editing the following parameters, we clicked on the key vault icon for adminPassword in order to select the appropriate Key Vault for the FirstSubscription Tenant Subscription and supplied the Key Vault Secret.
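    Creating the vault and secret ahead of time can also be scripted. Here is a minimal sketch using the AzureRM Key Vault cmdlets; the vault and resource group names are illustrative assumptions, and the vault must be enabled for template deployment so that ARM can read the secret during provisioning.

        # Create a resource group to hold the vault, then a Key Vault that
        # ARM template deployments are allowed to read secrets from.
        New-AzureRmResourceGroup -Name "chihostsp-kv" -Location "local"
        New-AzureRmKeyVault -VaultName "chihostspVault" `
            -ResourceGroupName "chihostsp-kv" `
            -Location "local" `
            -EnabledForTemplateDeployment

        # Store the administrator password as a secret named adminPassword.
        $secret = ConvertTo-SecureString "<your password here>" -AsPlainText -Force
        Set-AzureKeyVaultSecret -VaultName "chihostspVault" `
            -Name "adminPassword" `
            -SecretValue $secret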

    After a successful deployment of the chihostsp Resource Group, the following was displayed in the Azure Stack Tenant Portal. (Note: Not all the resources deployed into the Resource Group are visible here).

    We also made sure that the chihostsp Resource Group could be successfully deployed to Azure.  Here is the final result of that deployment in the Azure Portal:

    Committing a Change to the Local Git Repository

    Since the azuredeploy.parameters.json file changed as part of deploying the chihostsp Resource Group, the author needed to commit the changes in Visual Studio to Git for proper version control. We definitely learned the importance of committing often to the local Git repository with any new project being developed in Visual Studio.  The next few screen prints from Visual Studio illustrate the process.
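    The same commit can also be made from a posh-git prompt for those working outside Visual Studio. A minimal sketch, with an illustrative local path:

        # From the root of the local repository...
        Set-Location "C:\Source\chihostsp"

        # Review what changed as part of the deployment.
        git status
        git diff azuredeploy.parameters.json

        # Stage and commit with a clear, descriptive message.
        git add azuredeploy.parameters.json
        git commit -m "Update parameters after first successful TP2 deployment"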

    Final Thoughts

    In the next blog post, we will show how the author shared the new chihostsp project using GitHub and proceeded to work with other solution architect collaborators.  Here is a link to Blog post 2 for convenience:

    http://en.community.dell.com/techcenter/cloud/b/dell-cloud-blog/archive/2016/12/06/arm-templates-and-source-control-in-azure-stack-part-2-of-2

  • ARM Templates and Source Control in Azure Stack - Part 2 of 2

    In the previous blog post, ARM Templates and Source Control in Azure Stack - Part 1 of 2 (http://en.community.dell.com/techcenter/cloud/b/dell-cloud-blog/archive/2016/12/05/arm-templates-and-source-control-in-azure-stack), we described a process developed by the solution architects at the Dell Technologies Customer Solution Centers for the creation and sharing of ARM templates in our labs.  As a review, these are the steps that we took so far:

    1.  We created a new Azure Resource Group project in Microsoft Visual Studio 2015 named chihostsp, which was based on the sharepoint-2013-non-ha template found in the AzureStack-QuickStart-Template repository in GitHub.

    2.  We successfully performed a test deployment of chihostsp to both Azure and Azure Stack TP2. 

    3.  Since the deployment itself made changes to the azuredeploy.parameters.json file in the project, we committed those changes to the local Git repository on the author's machine.

    Defining the ARM Template Collaboration Process

    It was then time to share the chihostsp project with the rest of the team so they could make improvements on the base sharepoint-2013-non-ha template.  We reviewed the GitHub Guides site (link provided in the previous blog post) and discussed various methods of collaborating on development projects.  There were some strong opinions about whether to use forking or branching to keep track of bug and feature changes.  After much spirited conversation, we determined that we needed to keep this part of the process extremely simple until we gained more experience.  Otherwise, we knew that some of our colleagues would bypass the process due to perceived complexity.  We also considered that many of the predefined ARM templates found in the AzureStack-QuickStart-Templates GitHub repository would probably not require complex modifications - at least for our educational purposes.

    Here are the key principles we defined for sharing ARM templates and collaborating among our team members: 

    • The ARM template author publishes their local Visual Studio project to GitHub and provides collaborators permission to modify the contents of the GitHub repository.
    • Collaborators clone this GitHub repository and make descriptive changes and commits to their local master branches.
    • The author and all collaborators regularly push their local changes to the GitHub master branch. All solution architects working on the project perform regular pull synchronizations to ensure they are working on the latest version of the project on their local machines.
    • The author is responsible for ensuring that project versions that have been reviewed and successfully deployed to both Azure and Azure Stack are identified in GitHub using tags, also known as release points.

    Project Author Publishes to GitHub

    The first step in the iterative process of collaborating as a team on our chihostsp project was for the author to publish the project to a GitHub repository.  We used Visual Studio to accomplish this by browsing to the home page of Team Explorer with the project open and clicking Sync.

    Then, we needed to make sure the correct information appeared under Publish to GitHub as in the following screen print:

    A remote could then be seen for the project under Team Explorer --> Repository Settings.

    The author then browsed to GitHub to invite other solution architects to begin collaborating on the chihostsp project. This was accomplished by browsing to Settings --> Collaborators in the chihostsp repository and adding the collaborators' names and email addresses.  Once saved, an automatic email was sent to all the collaborators.

    Project Collaborators Begin Participating

    The solution architect assuming the role of a collaborator automatically received an email that provided a link to the chihostsp repository in GitHub.  After logging in to GitHub using their own account credentials, the collaborator accepted the invitation sent by the author in the last step.  Then, the collaborator cloned the repository to their local machine to begin making modifications by clicking on Clone or Download, and selecting Open in Visual Studio.

    Visual Studio automatically opened to the place where the solution architect collaborator chose to clone the repository on their local machine. Then, they opened the newly cloned repository.

    For simplicity, we decided to make only a couple of small cosmetic changes to the azuredeploy.json file in the chihostsp repository to practice our newly developed source control process. The status bar was invaluable in keeping track of the number of local changes to our project’s files, showing how many committed changes still needed to be synced to the local master branch, and indicating which branch was currently being modified. In the case of the process outlined in this blog, only the master branch is ever modified. We referred to Team Explorer and the Status Bar often to track our work.

    We changed only the defaultValue of the adminUsername parameter from lcladmin to masadmin. Also, the description under the domainName parameter was modified to read "The Active Directory domain name." Then, these changes were committed to the local master branch. Since concepts like forking and branching were not employed in our simple process and pushes were allowed directly to the remote master branch, we all agreed that the descriptions of all commits need to be very clear and verbose.
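    Before committing a change like this, the edited template can be validated against the subscription without actually deploying anything. A short sketch using the AzureRM validation cmdlet, assuming an existing Resource Group and a session already logged in to Azure or Azure Stack:

        # Validate the edited template and parameters against the target
        # Resource Group; errors are reported without creating any resources.
        Test-AzureRmResourceGroupDeployment -ResourceGroupName "chihostsp" `
            -TemplateFile ".\azuredeploy.json" `
            -TemplateParameterFile ".\azuredeploy.parameters.json"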

    After making these changes to the azuredeploy.json file, the modifications needed to be shared with the team. The collaborator did this by initiating a Push in Team Explorer’s Synchronization view.

    Browsing to the GitHub website, we verified that the commit had successfully made it to our central repository.

    The author and all contributors to the project then needed to perform a Pull Synchronization so that the changes were reflected in their local master branches. We now felt confident that we all could continue to push and pull changes to and from the remote master branch at a regular cadence throughout the duration of the project.
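    From a posh-git prompt, that regular cadence boils down to two commands. A minimal sketch:

        # Publish local commits on master to the shared GitHub repository.
        git push origin master

        # Fetch and merge everyone else's commits into the local master branch.
        git pull origin master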

    Identify Releases Using Tagging

    The final step in our process requires the author of our chihostsp project to identify the version of code that is guaranteed to deploy successfully. This is accomplished using a concept known as tagging.  The author used their local instance of Visual Studio to tag versions of the project that have deployed successfully to Azure and Azure Stack during testing.  We set the tags using the Local History view in Visual Studio, which can be launched from Solution Explorer or Team Explorer.

    At the time of this writing, Visual Studio 2015 did not have the ability to push or pull tags through its interface to the remote GitHub repository. To ensure that tags were being pushed and pulled successfully, the project author and all collaborators needed to run a special command at the command line. In this case, all project participants had to change to the local chihostsp directory at the command line on their local machines. Using a Git command line editor (posh-git was used here), we all executed the appropriate commands to push and/or pull the tags.

    • To push: git push --tags
    • To pull: git pull --tags
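    For completeness, a tag can also be created at the same command line before it is pushed. A short sketch, with an illustrative tag name and message:

        # Mark the current commit as a known-good release point.
        git tag -a v1.0 -m "Deploys successfully to Azure and Azure Stack TP2"

        # Publish all local tags to the remote GitHub repository.
        git push --tags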

    When a push was performed, the tag would appear in the remote repository in GitHub.

     

    Final Thoughts

    We hope these two blog posts provided some valuable insight into where IT infrastructure professionals might begin their Microsoft Azure Stack learning journey.  The creation and version control of Azure Resource Manager templates is a cornerstone skill for anyone looking to leverage Azure and Azure Stack in their organization.  We hope this information is helpful as you undertake your digital transformation journey to a hybrid cloud model, and we look forward to sharing more discoveries and insights as Microsoft Azure Stack matures and becomes available for public release.

  • Dell EMC Continues to Lead at OpenStack Summit

     

    OpenStack Summit Barcelona is in full swing this week.  Barcelona, of course, is the capital city of Catalonia, which is an autonomous community within Spain. Perfectly fitting for OpenStack, which is an autonomous community in its own right.

    Since the early days of OpenStack, Dell, EMC – and now Dell EMC – have had strong representation at OpenStack Summit and this week is no exception.

     

    Showcasing OpenStack Solutions

    First off, at the Dell EMC booth, we are showcasing several key elements of what we affectionately, and descriptively, call our build to buy continuum for OpenStack cloud solutions. 

    At the “build” end of the spectrum, the Dell EMC Red Hat OpenStack Cloud Solution is a co-engineered and fully validated reference architecture that is jointly designed, tested, and supported by Dell EMC and Red Hat.  While it can be described as a reference architecture, it is much more than that, as it is a bundled solution that can be ordered, delivered, and deployed as a complete and validated solution based on purpose-selected Dell EMC hardware components and Red Hat Enterprise Linux and OpenStack Platform software components.  Yet as a reference architecture, it has the inherent flexibility to adapt and expand based on customer needs.  You can learn more about this solution in the OpenStack community area of Dell TechCenter and at Dell.com/OpenStack.

    The latest version, just announced, is the Dell EMC Red Hat OpenStack Cloud Solution Version 6, which is based on the OpenStack Mitaka release and Red Hat OpenStack Platform 9 (OSP 9). Read more about this release on the recent Dell4Enterprise blog post entitled Delivering Enterprise Capable OpenStack Clouds. This release includes important extensions for Red Hat OpenShift to enable Platform as a Service (PaaS), including support for Docker containers and Kubernetes orchestration, and for CloudForms to enable unified multi-cloud management and automation.

    Now at the “buy” end of the spectrum, the VxRack System 1000 with Neutrino Nodes (or VxRack Neutrino for short) is a turnkey, rack-scale, engineered system for Infrastructure as a Service (IaaS), purpose built for cloud native applications. VxRack Neutrino is a pre-configured, hyper-converged system with enterprise-grade management and reporting software that can be deployed as a stable and reliable OpenStack environment within hours.  And VxRack Neutrino can be optionally configured with Dell EMC Native Hybrid Cloud for a Pivotal Cloud Foundry implementation for cloud-native application development.

    Both of these OpenStack solutions from Dell EMC are enterprise-grade systems that represent the best characteristics of reliability and scalability and provide a wide range of choice from the highly flexible to the fully turn-key.

     

    OpenStack Summit Technical Sessions

    If all that is not enough, Dell EMC has a strong presence in the OpenStack Summit technical sessions.  Here is a sample of the sessions that we are leading or in which we are participating:

    The Murano application catalog helps OpenStack cloud admins to publish a well-tested set of on-demand and self-service applications for end users to easily access and deploy. This session demonstrates a simple end-to-end process for getting Murano running on your dev environment or your laptop by running Murano in Docker containers. This talk is led by Magdy Salem, Lida He, and Julio Colon of Dell EMC.

    This talk, led by David Paterson of Dell EMC and Daniel Mellado of Red Hat, explains how to automate the validation testing of an OpenStack deployment, by outlining and demonstrating the entire process for automating a detailed test scenario.

    Led by Arkady Kanevsky and Nicholas Wakou of Dell EMC, learn how Hadoop can be deployed on an OpenStack cloud using the OpenStack Sahara project, and how OpenStack can be optimized to get the performance of a Big Data workload on the Cloud to match that of a bare-metal configuration.

    During OpenStack Newton, the Cinder team made progress in numerous areas for block storage.  In this session, we will provide an update on what has been accomplished in Newton and also discuss what may be coming in the Ocata release.  This talk is led by Sean McGinnis and Xing Yang of Dell EMC.

    Based on leading-edge standards work and methodologies developed by Dell EMC and Red Hat, this session will explain how to use industry-standard techniques to measure and evaluate the performance of your OpenStack cloud with real-world workloads. This talk is led by Nicholas Wakou of Dell EMC and Alex Krzos of Red Hat.

    This innovative session will explore how TensorFlow, a new open source platform for artificial intelligence by the Google Brain team, can be used with OpenStack Sahara for data processing.  This talk is led by Lin Peng and Accela Zhao of Dell EMC.

    This session looks at Neutron networking challenges and effective troubleshooting tips for common issues such as connectivity issues among instances, connecting to instances on the associated floating IP, SSH working only part of the time, and application payload transfer issues. This talk is led by Mohammad Itani, Lida He, and Diego Casati of Dell EMC.

    The Rally project gives OpenStack developers and operators relevant and repeatable benchmarking data to determine how their clouds operate at scale. In this session, learn how to deploy Rally, run benchmark scenarios, and analyze the results of those tests. This talk is led by Magdy Salem, Javier Soriano, and Mohammad Itani of Dell EMC.

    This is a panel discussion on the pros and cons of various OpenStack consumption models, led by IDC Research Director Ashish Nadkarni, and featuring VS Joshi of Dell EMC and representatives from Mirantis and Canonical.

     

    Wow, did we say strong representation?  If you miss these sessions or are not fortunate enough to go to Barcelona, you can view all of the sessions at openstack.org/videos.

       

  • Get started today with TP2 and get ready for Azure Stack

    At this week’s Ignite, you may have heard about the availability of Technical Preview 2 (TP2) for Azure Stack. We at Dell EMC are excited and encourage our customers to deploy TP2 as one of the first key steps towards adopting Azure Stack before it is available at GA.

    At this year’s Worldwide Partner Conference, Microsoft announced our partnership to deliver Azure Stack as an Integrated System, targeting availability in mid-2017. For Dell EMC, this is a continuation of our joint development partnership with Microsoft, going back to the early days of developing Cloud Platform Systems with a focus on delivering integrated systems for hybrid cloud solutions to our customers. Azure Stack is the next phase of our partnership.

    TP2 helps you prepare your infrastructure, operations, and application teams to hit the ground running when we bring you the Integrated System for Azure Stack at GA. Key areas to start planning include (but are not limited to):

    1. Application needs

      1. IaaS and PaaS capabilities within TP2 and leading into GA

      2. Capacity and Performance needs

      3. Scenarios: PoC\DevTest and Production

      4. DevOps practices and infrastructure

    2. Infrastructure needs

      1. Identity and Access for Admins and Tenants: Azure AD or ADFS

      2. Network integration into your existing border devices

      3. ITSM

      4. Organization Security posture

      5. People and process

    3. Deployment scenarios

      1. Azure connected or Island

      2. Single or Multi-Region

      3. Capacity and Scale Units

    You can confidently begin your journey knowing that at GA we will bring you an Integrated System to deliver Compute, Storage, Networking and the Azure Stack software pre-configured and fully supported to meet your application and infrastructure needs.

    With TP2, customers can deploy a single-node Azure Stack system and explore user experiences including (but not limited to):

    1. Create plans and offers around key Azure Services

    2. Run cloud-native workloads, such as a three-tier app deployed from an ARM template

    3. Build a portfolio of cloud services using gallery items

    Our recommended TP2 system is an R630 designed to be cost-effective as a Proof of Concept (PoC) node, enabling you to test and develop PoCs for your use cases. This system is non-resilient and is designed to run only as a single-node PoC system. Multi-node configurations will be available at GA, and this system will not be upgradable to multi-node configurations. This TP2 configuration is designed with the goal of continuing to deliver updates to a single-node PoC system, including TP3 and into GA, targeting PoC and dev/test use cases.

    PoC System Specification: R630 (2.5" Chassis)

    Chassis Configuration:
    10 x 2.5" Chassis

    Processor Configuration:
    Default: Intel® Xeon® E5-2630 v4 2.2GHz, 25M Cache, 8.0GT/s QPI, Turbo, HT, 10C/20T (85W)
    Option 1: Intel® Xeon® E5-2640 v4 2.4GHz, 25M Cache, 8.0GT/s QPI, Turbo, HT, 10C/20T (90W)
    Option 2: Intel® Xeon® E5-2650 v4 2.2GHz, 30M Cache, 9.60GT/s QPI, Turbo, HT, 12C/24T (105W)

    Memory:
    Default: 128GB (8 x 16GB RDIMM, 2400MHz)
    Option: 256GB (8 x 32GB RDIMM, 2400MHz)

    Storage Controller:
    Internal: HBA330

    Storage - OS Boot:
    1 x 400GB Solid State Drive SATA Mix Use MLC 6Gbps 2.5in Hot-plug Drive, S3610

    Storage - Cache (SSD):
    Default: 2 x 200GB Solid State Drive SATA Write Intensive 6Gbps 2.5in Hot-plug Drive, S3710

    Storage - Data (HDD):
    Default: 6 x 1TB 7.2K RPM SATA 6Gbps 2.5in Hot-plug Hard Drive

    Network Cards:
    NDC: Intel X520 DP 10Gb + Intel i350 DP 1GbE

    Power Supply:
    Redundant, 750 Watts

    Contact one of our Cloud Specialists to order the system; reach us at mscloud_sales@dell.com. If you would like to see a demo of TP2, contact your Dell Account Rep or Channel Partner Rep to engage the Dell EMC Customer Solution Center.

    Links for more information on TP2:

    Microsoft TP2 Announce

    Microsoft Azure Stack PoC architecture

    TP2 deployment pre-requisites

    Download TP2

    Disclaimer: Dell does not offer support for Azure Stack TP2 at this time. Dell is actively testing and working closely with Microsoft on Azure Stack, but since it is still in development, the exact hardware components/configurations that Dell will fully support are still being determined.  The information divulged in our online documents prior to Dell launching and shipping Azure Stack may not directly reflect Dell supported product offerings with the final release of Azure Stack. We are, however, very interested in your results/feedback/suggestions! Please leave comments below.

  • Get started and streamline your journey to cloud with Hybrid Cloud System for Microsoft

    At this year’s Worldwide Partner Conference, Microsoft announced our partnership to deliver Azure Stack as an Integrated System, targeting availability in mid-2017. For Dell, this is a continuation of the partnership and joint development with Microsoft, going back to the early days of developing Cloud Platform Systems with a focus on delivering integrated systems for hybrid cloud solutions to our customers. Azure Stack is the next phase of that partnership.

    Another key highlight from the Microsoft announcement was Microsoft’s recommendation to adopt Cloud Platform Systems (CPS) today to move forward in the cloud journey.

     

    “Customers ready to deploy an Azure-consistent cloud on their premises now should buy Microsoft Cloud Platform Solution (CPS). Customers will be able to use Azure Stack to manage CPS resources thereby preserving investments in CPS.”
    Mike Neil, Corporate Vice President, Enterprise Cloud, Microsoft Corporation

     

    The Dell Hybrid Cloud System for Microsoft CPS Standard (DHCS), our integrated system for CPS hybrid cloud, is a great way to get started and streamline your journey to cloud. The early steps to a hybrid cloud are usually evolutionary, but they still impact applications, security, infrastructure, and the operating model. The path to Azure Stack involves steps even further along that journey, so getting started with DHCS today helps in three key areas:

    1. Rationalizing applications
    2. Adopting the cloud model
    3. Getting started with hybrid

     

    So for example (area #1), most applications today are virtualized, either traditional enterprise (Exchange, SharePoint) or so-called “n-tier” such as web or databases. As a first step, you inventory the applications and assess your options based on classification of applications and data, cost, security, and so forth. By the end of this step, you have identified the applications to rehost as IaaS virtual machines (VMs) in a cloud infrastructure like DHCS, and a migration plan for the applications and data. Eventually as you re-tool yourself to rebuild some of your existing applications as cloud native or develop new cloud native applications, Azure Stack will provide you with a platform to develop and deliver them on premises. With our integrated system for Azure Stack, you can continue to run your traditional applications on DHCS while managing them from Azure Stack as IaaS VMs without having to migrate or upgrade your current investments.

     

    Area #2 is the part of your journey towards cloud having to do with adopting the cloud model. Orienting your business and operating model towards service delivery and consumption is key to getting the most from cloud, and it takes time and experience to achieve. Adopting multi-tenancy, self-service, metering, and governance is a critical first step towards being truly cloud native. With a consumption model, you are able to increase utilization, gain control of your resources, and reduce cost risk while rapidly delivering the services that your tenants need. DHCS comes ready to enable adoption of the cloud model on premises, on a software-defined infrastructure that is familiar and proven in the market today.

     

    Hybrid adoption is the final area most customers struggle with. We have identified two main hybrid use cases to get started that bring value to customers today, and integrated them out-of-the-box into DHCS. With the Backup and Site Recovery services from the Microsoft Azure public cloud, you not only get integration into Azure, but also the ability to efficiently implement a business-continuity strategy with zero CAPEX and with OPEX based on consumption for your on-premises cloud.

     

    With the Dell Hybrid Cloud System for Microsoft, you get a platform ready to rehost your applications today and deliver them as services to your tenants, enabling self-service, metering, and governance. You also get the option to consume Microsoft Azure services like Backup and Site Recovery out of the box. DHCS is built on a familiar and proven technology stack with Windows Server, System Center and Windows Azure Pack, enabling you to focus less on the workings of the technology and more on areas that transform your business as you continue to take advantage of cloud.

     

    Whether you choose to rehost the applications and adopt IaaS with DHCS or eventually re-factor applications to leverage any of the Azure platform-as-a-service capabilities, Dell will partner with you along this journey and protect your investments as you adopt DHCS today and plan for Microsoft Azure Stack tomorrow.

  • SPEC Cloud IaaS Benchmarking: Dell Leads the Way

    By Nicholas Wakou, Principal Performance Engineer, Dell

     

      

    Computer benchmarking, the practice of measuring and assessing the relative performance of a system, has been around almost since the advent of computing.  Indeed, one might say that the general concept of benchmarking has been around for over two millennia.  As Sun Tzu wrote on military strategy:

    "The ways of the military are five: measurement, assessment, calculation, comparison, and victory.  If you know the enemy and know yourself, you need not fear the result of a hundred battles.”
     

    The SPEC Cloud™ IaaS 2016 Benchmark is the first specification by a major industry-standards performance consortium that defines how the performance of cloud computing can be measured and evaluated. The use of the benchmark suite is targeted broadly at cloud providers, cloud consumers, hardware vendors, virtualization software vendors, application software vendors, and academic researchers.  The SPEC Cloud Benchmark addresses the performance of infrastructure-as-a-service (IaaS) cloud platforms, either public or private.

    Dell has been a major contributor to the development of the SPEC Cloud IaaS Benchmark and is the first – and so far only – cloud vendor, private or public, to successfully execute the benchmark specification tests and publish its results.  This article explains this new cloud benchmark and Dell’s role and results.

    How it works and what is measured

    The benchmark is designed to stress both provisioning and runtime aspects of a cloud using two multi-instance I/O and CPU intensive workloads: one based on YCSB (Yahoo! Cloud Serving Benchmark) that uses the Cassandra NoSQL database to store and retrieve data in a manner representative of social media applications; and another representing big data analytics based on a K-Means clustering workload using Hadoop.  The Cloud under Test (CuT) can be based on either virtual machines (instances), containers, or bare metal.

    The architecture of the benchmark comprises two execution phases, Baseline and Elasticity + Scalability. In the baseline phase, peak performance for each workload running on the Cloud under Test (CuT) alone is determined in 5 separate test runs.  Data from the baseline phase is used to establish parameters for the Elasticity + Scalability phase.  In the Elasticity + Scalability phase, both workloads are run concurrently to determine elasticity and scalability metrics.  Each workload runs in multiple instances, referred to as an application instance (AI).  The benchmark instantiates multiple application instances during a run.  The application instances and the load they generate stress the provisioning as well as the run-time performance of the cloud.  The run-time aspects include CPU, memory, disk I/O, and network I/O of these instances running in the cloud.  The benchmark runs the workloads until specific quality of service (QoS) conditions are reached.  The tester can also limit the maximum number of application instances that are instantiated during a run.

    The key benchmark metrics are:

    • Scalability measures the total amount of work performed by application instances running in a cloud.  The aggregate work performed by one or more application instances should scale linearly in an ideal cloud.  Scalability is reported for the number of compliant application instances (AIs) completed and is an aggregate of workload metrics for those AIs normalized against a set of reference metrics.
       
    • Elasticity measures whether the work performed by application instances scales linearly in a cloud when compared to the performance of application instances during the baseline phase.  Elasticity is expressed as a percentage (see the illustrative formula after this list).
       
    • Mean Instance Provisioning Time measures the time interval between the instance provisioning request and connectivity to port 22 on the instance.  This metric is an average across all instances in valid application instances.
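    To make the elasticity idea concrete, one simplified and non-normative way to think about the metric is as the average ratio of observed to baseline performance across the N application instances in a run, expressed as a percentage (the official SPEC definition normalizes each workload's metrics in more detail):

        \text{Elasticity} \approx \frac{100\%}{N} \sum_{i=1}^{N} \frac{P_i^{\text{run}}}{P_i^{\text{baseline}}}

    Here P_i^run stands for the performance of application instance i during the Elasticity + Scalability phase, and P_i^baseline for the peak performance measured for the same workload during the baseline phase.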
       

    Benchmark status and results

    The SPEC Cloud IaaS benchmark standard was released on May 3, 2016, and more information can be found in the Standard Performance Evaluation Corporation’s announcement.  At the time of the release, Dell submitted the first and only result in the industry.  This result is based on the Dell Red Hat OpenStack Cloud Solution Reference Architecture, composed of Dell’s PowerEdge R720 and R720xd server platforms running the Red Hat OpenStack Platform 7 software suite.  The details of the result can be found on the SPEC Cloud IaaS 2016 results page.

    Dell has been a major contributor to developing the SPEC Cloud IaaS benchmark standard from the beginning, from the drafting of the SPEC Cloud Working Committee’s charter to the release of the benchmark.  So it is no surprise that Dell was the first company to publish a result based on the new cloud benchmark standard.  Dell will continue to use the SPEC Cloud IaaS benchmark to compare and differentiate its cloud solutions and will additionally use the workloads for performance characterizations and optimizations for the benefit of its customers.

    At every opportunity, Dell will share how it is using the benchmark workloads to solve real world performance issues in the cloud.  On Wednesday, June 29th, 2016, I will be presenting a talk entitled “Measuring performance in the cloud: A scientific approach to an elastic problem” at the Red Hat Summit in San Francisco.  This presentation will include the use of SPEC Cloud IaaS Benchmark standard as a tool for evaluating the performance of the cloud.

    Computer benchmarking is no longer an academic exercise or a competition among vendors for bragging rights.  It has real benefits for customers, and now – with the creation of the SPEC Cloud IaaS 2016 Benchmark – it advances the state of the art of performance engineering for cloud computing.

      

  • Dell Cloud Manager Blueprint Designer Guide - providing a quick start for our customers with numerous samples and tutorials

    This blog post is a follow-up to the previous post, Cloud portability with TOSCA blueprints and Dell Cloud Manager.

    The Blueprint function added to the latest version of Dell Cloud Manager (version 11) allows you to create complex multi-tier application stacks which include automated scaling and automated healing, and supports configuration management (Chef or Puppet) and Docker.  Blueprints enable Dell Cloud Manager users to quickly launch application stacks in virtual machines in a consistent fashion, with predictable costs and results.

    Want to know more?  The Blueprint Designer Guide documents in detail all aspects of designing and using Blueprints in Dell Cloud Manager.

    Key information in the Blueprint Designer Guide includes 18 different template samples and matching tutorials demonstrating an assortment of different applications and combinations of possible attributes, such as using a load balancer, configuration management, automation, or Docker.  These sample templates and tutorials are intended to jump-start our users with developing their own Blueprints.  The table below summarizes the 18 template samples and tutorials: 

    Operating System | Applications | # Servers | Load Balancer? | Automation? | Config. Mgmt. | Docker?
    Linux | Apache | 1 | No | No | No | No
    Linux | Apache | 1 | No | Yes | No | No
    Linux | Apache x 2 | 3 | Yes | Yes | No | No
    Linux | Apache (with network selector) | 1 | No | No | No | No
    Windows | Apache | 1 | No | No | No | No
    Linux | Apache | 1 | No | No | Chef | No
    Linux | Apache | 1 | No | Yes | Chef | No
    Linux | Apache x 2 | 3 | Yes | Yes | Chef | No
    Linux | Tomcat, Wildbook, MySQL | 3 | Yes | Yes | Chef | No
    Linux | Apache, Joomla, MySQL | 2 | No | Yes | Chef | No
    Linux | Apache, WordPress, MySQL | 2 | No | Yes | Chef | No
    Linux | Apache, Drupal, MySQL | 2 | No | Yes | Chef | No
    Linux | Nagios | 1 | No | No | Chef | No
    Windows | Apache | 1 | No | Yes | Chef | No
    Windows | IIS Server, SQL Server | 2 | No | Yes | Chef | No
    Linux | Apache, WordPress, MySQL | 3 | Yes | Yes | Puppet | No
    Linux | Apache | 1 | No | No | No | Yes
    Linux | Tomcat, Wildbook, MySQL | 1 | No | No | No | Yes

    The 18 template samples and tutorials were created by Dell Cloud Manager Client Services Engineering team members Gary Forghetti, Shaun Marshall, and Ewan Lyall. 

    Need more information on Dell Cloud Manager?  See the Dell Cloud Manager homepage.  General information on Dell Cloud Manager including capabilities and documentation can be found there.

    Need more information on Dell Cloud Manager Blueprints specifically?  See the Dell Cloud Manager Blueprint Designer Guide.  Details on designing and creating Blueprints can be found there.

  • Announcing the Dell Hybrid Cloud System for Microsoft

    Last week Michael Dell and Satya Nadella announced the industry's first integrated system for hybrid cloud at Dell World. At Dell, we believe that the future of cloud is hybrid, and for those IT organizations and service providers looking to rapidly deploy a cloud solution, we have a fully integrated and modular system that can be customized to meet your needs.

     

    For a few years now, Dell and Microsoft have been working closely on bringing the learnings of building and operating one of the largest public clouds to the data center. The goal is to provide an Azure-like experience to enterprise customers and service providers. Last year, we unveiled Cloud Platform System (CPS) Premium, which has revolutionized how customers deploy and operate private clouds at large scale. Over the past year, we heard your feedback and have built a mid-size hybrid cloud with the same principles as CPS Premium, but more modular, with the ability to start small and pay as you grow. The Dell Hybrid Cloud System for Microsoft CPS Standard is the second product in the Cloud Platform System (CPS) family.

     

    Key features include:

    1. Fully integrated cloud stack with System Center and Windows Azure Pack
    2. Integrated multi-cloud management and Azure IaaS with Dell Cloud Manager
    3. Discretely scalable compute and storage blocks
    4. Non-disruptive, sequenced patch and update process, with Microsoft and Dell updates tested, validated and packaged quarterly
    5. Integrated hybrid cloud capabilities for Azure Backup, Azure Site Recovery and OpsInsight
    6. Dell Financial Services models to convert CAPEX to OPEX, based around consumption and lowering risk

     

    Our goal is to enable you to confidently adopt cloud in your data center with predictable results, with a solution that lowers adoption risk, streamlines operations, and simplifies the supply chain.

     

    Go to dell.com/dhcs for more information, and stay tuned for more blogs on this topic.

  • UX Case Study: Blueprint Versioning


    This is the final post in a series of User Experience (UX) topics on the Dell Cloud Blog. The first four topics were UX Culture at Dell Cloud Manager, The Benefits of a UI Pattern Library, Docs Day: UX Tested, Engineer Approved, and Best-in-Class User Research and Persona Building. We look forward to sharing more UX strategy with you in the future!


    Dell Cloud Manager recently added a customizable catalog feature that allows admin-level customers to upload blueprints and make them easy for their end users to deploy. In the original feature, the user experience (UX) team added support for uploading blueprints through the user interface, in addition to the existing ability to upload through the API. We received great feedback on the catalog and upload capabilities, but one key use case missing from the first release was the ability to track versions. Through continuous UX research, we learned that users could benefit greatly from the ability to maintain and track multiple versions of a single blueprint. For example, an administrator could test a new version before making it publicly available in the catalog. Also, if the blueprint administrator discovered a problem with a particular version, they could roll back to a previous, more stable version. Adding this missing version support became the goal of our next feature release.

    A Lean Team Effort

    At Dell Cloud Manager, we use lean teams to quickly research, design, develop, and test a new feature by fully dedicating a cross-functional team to a measurable goal. The blueprint versioning feature was a lean team effort that included representatives from UX, front-end engineering, back-end engineering, and product management. All of the participants were remote, and all were fully dedicated to minimize distractions. This allowed us to work very quickly and deliver the feature to our customers in record time. The UX team kicked off the collaboration by presenting an initial set of mockups that were reviewed and discussed with the entire team. We considered numerous options when deciding how the feature could work and continuously revisited—and even refined—our primary goal. Once we came to a consensus, all team members worked in parallel, each of us with a common vision for the feature.

    User Studies

    Three business days after the initial kick-off meeting, the UX team ran a set of hour-long usability studies. The participants completed core tasks using an interactive prototype, developed in collaboration with front-end engineering. This prototype eventually became our final software implementation. Over the next week, we iterated on the UI design and ultimately “hooked it up” to the back-end engineering work.

    The usability studies validated our assumptions and revealed areas where we could improve our design and implementation. For example, we identified a subtle labeling issue. Users were tripped up by a dialog button labeled “Edit version.” In this part of the workflow, users had already made their edits and wanted to “Save”, not “Edit.”  We also found design and implementation gaps. Users were confused as they created a new version because there was no feedback that the version had been successfully created. The screen refreshed to the initial, default view, and users were left wondering if their changes had been saved. These issues were identified and quickly fixed.

    The most significant finding of the usability study related to our capability set. We experimented with the idea of allowing users to edit their blueprints directly in Dell Cloud Manager. However, we realized that the inline editor we provided was competing with the user’s external versioning system. If a user edited within Dell Cloud Manager, their version would not be recorded in their preferred version control system, so we decided to remove this feature. On the surface it might seem like we reduced the capabilities of Dell Cloud Manager, but in reality, it clarified the capability, reduced confusion, and led to a superior user experience overall.  By removing the inline editor, there is no longer confusion about where a user should edit files. And, there is no question about where version control is performed and managed. Using Dell Cloud Manager, our users can see their versions and switch between them. Any number of external tools can, and should, be used alongside Dell Cloud Manager to create and manage versions.

    Lean Team Impact

    The blueprint versioning feature was designed, developed, tested, documented, and deployed in 4.5 weeks. The tight collaboration of UX, engineering, and product management made it possible. The entire team was focused on building the essential components to support the best user experience. From reviewing the initial mockups to iterating on the UI design as a result of the usability study, the UX team was able to take user feedback and keep the lean team focused on the needs of the end user.


    The Dell Cloud Manager User Experience Team welcomes your feedback and suggestions! If you’d like to join our research panel and contribute your voice to the development of Dell Cloud Manager, please visit: http://www.enstratius.com/support/usability.