Dell Community

Latest Blog Posts
  • Dell TechCenter

    Digging Deep into Dell Boomi – How does it work?

    Last week on Direct2Dell, I posted an overview of the Dell Boomi application integration solution, which allows existing legacy applications to integrate seamlessly with state-of-the-art SaaS solutions via an easy-to-use drag-and-drop wizard. In this post, I want to dig a bit deeper into Dell Boomi to explain how it works and why you should choose it.

    Why Dell Boomi?

    Many customers first consider custom code, an appliance, or a cloud edition as a solution to their integration problem. The following charts detail why Dell Boomi is a superior option to each of those choices:

    Custom Code vs. Dell Boomi

    Appliance vs. Dell Boomi

    Cloud Edition vs. Dell Boomi

    How does Dell Boomi work?

    The solution process for Dell Boomi follows three steps:

     

    The Dell Boomi wizard allows customers to easily connect items from the legacy and SaaS solutions using a visual tool. The tool auto-generates up to 80% of mappings and leverages thousands of live data maps available in the Dell Boomi community to reduce integration setup time as well as ensure that your solution tracks to previously deployed solutions.

    In a future Dell TechCenter blog post, I will review the various deployment methodologies for Dell Boomi.

    Boomi Community

    A strong feature of the Dell Boomi solution is its fast-growing and evolving online community for customers. The community contains the following resources:

    • Open Platform – Free Connector SDK
    • Community Supported Connectors – 100+ application connectors available
    • Large Integration Partner Ecosystem
    • Customer Feedback Drives Development – 25% of enhancement requests are now in the product
    • Boomi Suggest – 2,000,000 mapping pairs indexed; 16,000+ functions indexed

    For more information on Boomi, select from these options:

     

  • Dell TechCenter

    Hybrid ISO images on Dell servers

    The isohybrid tool is used to create hybrid images from bootable ISO images. A hybrid image created by isohybrid can be used as a USB image or as an ISO image.
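
    As a rough sketch of the typical workflow (the isohybrid utility ships with the syslinux package; the image name and the USB device /dev/sdb below are illustrative, so double-check the device name before writing to it):

    linux# isohybrid Centos60-Base.iso                # add the MBR and zero padding in place
    linux# dd if=Centos60-Base.iso of=/dev/sdb bs=1M  # write the hybrid image to a USB key
    linux# sync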

    While creating a hybrid image, the tool writes an MBR (Master Boot Record) into the first 512 bytes of the ISO image and pads the end of the image with zeros so that the size of the final image is a multiple of 1MB. The resulting MBR lists one partition, starting at offset zero (by default) and ending at the end of the hybrid image. For example, fdisk -l on an isohybrid image looks like the following:

    linux# fdisk -l Centos60-Base.iso

    Disk Centos60-Base.iso: 231 MB, 231735296 bytes
    64 heads, 32 sectors/track, 221 cylinders, total 452608 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xf2d79d93

         Device        Boot      Start         End      Blocks   Id  System
    Centos60-Base.iso1   *           0      452607      226304   83  Linux

    BIOSes usually expect the MBR to occupy the first 512 bytes and the first partition to start after the first sector (the first 512 bytes). Since these two overlap in a hybrid ISO image, the server's BIOS gets confused, treats the image as a floppy image, and fails to boot with the message "isolinux.bin missing or corrupt". If the "USB Flash Drive Emulation Type" BIOS setting is changed from "Auto" to "HDD/Hard Disk", the image will boot. This setting forces the BIOS to read the MBR and boot from the USB key as if it were a hard disk. Below are screenshots of the required BIOS changes on 8G, 9G, 10G, and 11G servers respectively:

    [Screenshots: "USB Flash Drive Emulation Type" BIOS setting on 8G, 9G, 10G, and 11G servers]

    The other obvious workaround to this issue is to build a hybrid image so that the first partition starts at a non-zero offset. Unfortunately, this brings its own issues. According to the ISO 9660 specification, the first 32KB of an ISO image is unused (mostly zeroed out). The isohybrid tool uses the first 512 bytes of this space to set up the MBR of the hybrid image. A partition that starts at a non-zero offset (between 0 and 32KB) will have all zeros at its start instead of a superblock providing the filesystem information. This causes the mount program to fail to mount the partition (e.g. /dev/sda1), thereby causing a boot failure.
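
    For illustration, if such a non-zero-offset image has been written to a USB key that shows up as /dev/sdb (an assumed device name), the difference looks roughly like this:

    linux# mount /dev/sdb1 /mnt   # fails: no filesystem superblock at the start of the partition
    linux# mount /dev/sdb /mnt    # works: the ISO 9660 metadata is at its expected offset in the whole device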

     

    As far as I know, xorriso is the only tool that creates hybrid images with a non-zero offset that can be dual-mounted (both as the overall device [/dev/sda] and as a partition [/dev/sda1]). More information about this tool can be found on the wiki page: http://libburnia-project.org/wiki/PartitionOffset.
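
    A minimal sketch of building such an image with xorriso (the ISO tree path, output name, and location of isohdpfx.bin are assumptions that vary by distribution; -partition_offset counts 2048-byte blocks, so 16 corresponds to a 32KB offset):

    linux# xorriso -as mkisofs -o hybrid.iso \
               -isohybrid-mbr /usr/lib/syslinux/isohdpfx.bin \
               -partition_offset 16 \
               -b isolinux/isolinux.bin -c isolinux/boot.cat \
               -no-emul-boot -boot-load-size 4 -boot-info-table \
               /path/to/iso-root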

     

    To sum up, hybrid ISO images created with a zero offset boot once the right BIOS changes are made. Images with a non-zero offset don't boot unless the boot scripts are updated to mount the overall device instead of the partition, or the right tools are used to create images that support dual-mounting.

  • SharePoint for All

    Beware the BLOB taking over your SQL – the 1st Post of the Top 5 Performance Killers in #SharePoint Storage! #qSharePoint

    Welcome to my first post in a series about the Top 5 Performance Killers in SharePoint Storage. With SharePoint exploding across organizations, many of those organizations are suddenly going to feel the hit that all the new user content stored in SharePoint has on SharePoint performance.

    Exuberant use will increase the data stored in SharePoint, including large files and multiple versions of files. This isn’t a bad thing; in fact, it usually indicates good adoption and effective collaboration. But this explosion of the content database can cause an outcry from two quarters: end users complain about SharePoint’s slow performance, and SQL DBAs object that SharePoint is taking up too much expensive server space and processing power.

    SharePoint Storage Performance Killer #1:

    BLOBs, BLOBs and more BLOBs: How Unstructured Data is Taking Over SQL

    Consider the core document types you store in SharePoint: PDFs, Word and PowerPoint files, large Excel spreadsheets. A document in any of these common file types is usually well over a megabyte, which means it is saved in SQL Server as a BLOB (Binary Large OBject). Having many BLOBs in SQL Server causes several issues.

    First, BLOBs take up storage – a lot of storage. Whenever a user creates a new file in SharePoint, a new BLOB is created, increasing the size of the content database. If versioning is turned on in the SharePoint environment, then each new version of a file requires a new BLOB, so a 10 MB PowerPoint presentation that has been altered 10 times takes up 100 MB of space even though it is just one file! In fact, up to 90-95% of the storage overhead of a SharePoint content database in SQL Server consists of BLOBs (http://blogs.msdn.com/b/mikewat/archive/2008/08/19/improving-sharepoint-with-sql-server-2008.aspx).

    Moreover, those BLOBs aren’t just consuming space; they are also using server resources. A BLOB is unstructured data. Therefore, any time a user accesses any file in SharePoint, the BLOB has to be reassembled before it can be delivered back to the user – taking SQL processing power and time. And when SQL Server is reading data from the database files, it has to skip over BLOBs in the data files that aren't essential to processing a query, which also consumes processing power and further slows SharePoint performance.

    The Solution

    The solution is to move BLOBs out of SQL Server and into a secondary storage location – specifically, a higher density storage array that is reasonably fast, like a file share or network attached storage (NAS).

    This approach addresses both the storage and server resource issues that BLOBs cause. In fact, a recent Microsoft whitepaper reports that externalizing BLOBs in SharePoint reduces the size of the content database in SQL to just 2-5% of its original size. Moreover, externalized data was found to be 25% faster in end-user recall times and 30.8% faster in user upload times (see this post here for more details). This performance improvement is due, in part, to increased BLOB transfer speeds between SQL Server and the client, since some of the file transfer can be handed off to the Windows Server operating system rather than the BLOB having to be processed and reassembled by the SQL Server process.

    Microsoft offers two options for offloading BLOBs to less expensive, more efficient storage: external BLOB storage (EBS) and remote BLOB storage (RBS). EBS is Microsoft’s original BLOB management API, and Microsoft has established RBS as its successor technology. They are fully supported parts of the SQL Server ecosystem, but because they are fairly new, administrators are wary of using them. One concern is that offloaded BLOBs could be easily lost. Although this fear is unfounded, other concerns are legitimate. In particular, both EBS and RBS have limited management and offer no flexibility with rules for externalizing data. When used natively, these options only add to the complexity of managing SharePoint storage. For example, ordinary actions like deleting files or creating new versions leave behind orphaned BLOBs in the BLOB store, requiring manual cleanup. This restriction, the lack of administrative tools (it’s all done by T-SQL queries), and the relative inflexibility of the FILESTREAM RBS provider have led many architects to advise against using the native provider in a production environment without the help of third-party tools such as Quest Storage Maximizer.

    Quest Storage Maximizer for SharePoint uses both EBS and RBS to make externalizing content easy. With Storage Maximizer for SharePoint, you can easily move large, old and unused SharePoint data from SQL storage to more efficient storage, like file shares and network attached storage devices, all while keeping the data accessible to the end user via SharePoint.

    Storage Maximizer for SharePoint integrates into SharePoint’s Central Administration and offers an intuitive user interface where you can set the scope for externalizing documents from a farm down to a document library, specify the target for the external data, and define the rules for which data to externalize:

       

    Figure 1. Storage Maximizer for SharePoint makes it easy to move large or old SharePoint data from SQL Server to more efficient storage, improving SharePoint performance.

        

    Figure 2. Storage Maximizer enables you to select the scope of a storage definition job, from the farm down to the list or library.

    By externalizing the content that slows your system, Storage Maximizer for SharePoint improves SharePoint performance for searching, uploading and accessing data, enhancing user productivity and satisfaction.

    Stay tuned to this blog for the rest of this series on the Top 5 Performance Killers in SharePoint Storage!

  • Dell TechCenter

    DVS Simplified: An Inside Look at Simplicity

    Content by Dell Desktop Virtualization Engineer Daniel de Araujo @DFdeAraujo and Client Virtualization Specialist Lee Burnette @Lee4Dell.

    We've all heard the arguments that hinder the adoption of desktop virtualization. Most argue that the capital expenditure alone is too high to even discuss a virtual desktop infrastructure. Others believe that complexity and performance issues will decrease ROI and drive TCO higher than initially anticipated. Although I find some of these arguments to be true, I believe that simplifying the design, deployment, and maintenance of desktop virtualization will drive lower costs and be the catalyst for the ultimate adoption of VDI.

    A solution like no other, Dell DVS Simplified virtually eliminates the complexity and drastically reduces the design and deployment costs attributed to traditional VDI components. Let's take a closer look at how this is done:

    Reduction of Required Components



    If you look at a typical VDI deployment, the following components are required to ensure functionality:

         • Load Balancers and Connection Brokers to manage desktop sessions and ensure high availability
           in the environment. 
         • Compute servers to provision and manage virtualized desktops.
         • Management servers to control, monitor, and maintain the entire environment.
         • Shared storage with high-speed interconnects to store virtual desktops, golden images, and
           management databases.

    DVS Simplified removes the need for the aforementioned components because everything you need for a VDI environment, with the exception of shared storage, is contained within the appliance. Here's a closer look:

    Hassle-free Grid Architecture



    Instead of having to use multiple management, configuration, and monitoring interfaces, DVS Simplified utilizes Citrix's VDI-in-a-Box grid architecture to coordinate activities and balance desktops across every server connected to the same grid. This may sound confusing, so it might be easier to think of the grid as a physical fabric that allows the DVS Simplified appliances to communicate with one another. As a result of the grid, operations such as creating and managing golden images, templates, desktops, and users can be performed at the grid level by connecting to any DVS Simplified console. For example, once a golden image is created or modified on an appliance, that appliance automatically replicates the new image to the other appliances on the grid. There's no need to transfer images manually.

    Local Storage Optimization via Linked Clones



    Since the DVS Simplified appliance utilizes only local storage, there's a preconceived notion that each desktop will occupy a large portion of disk space. For example, if the golden image (i.e., the image that you base your desktops on) is 20GB, many believe that each virtual desktop will occupy 20GB of space. This is simply not true. Through the use of linked clones, each virtual desktop image occupies only an estimated 15% of the size of the golden image, or roughly 3GB for a 20GB image. As a result, the DVS Simplified appliance can save approximately 85% of local storage space.

    Dell Proven Performance

    In addition to its simplicity, the DVS Simplified appliance provides the components necessary to give the end user an acceptable virtual desktop experience. This is no small feat, since one of the most difficult tasks when designing a VDI environment is selecting the right balance of CPU cores, memory capacity, network bandwidth, and storage capacity and speed within a server. Why, you ask? The main driving force is the huge amount of small-block disk IO that all the virtual desktops generate. Since the disks are shared among all the operating systems on the server, requests to disk become randomized, causing storage performance to decrease substantially.

    In order to mitigate degradation of storage IO, the DVS Simplified appliance contains eight 146GB 15K 6Gbps SAS drives behind a PowerEdge RAID Controller (PERC) with 1GB of cache. The eight drives provide enough IO capacity to satisfy a large number of desktops, while the 1GB of cache accelerates read operations by caching frequently accessed data. When requested data is in the cache, the PERC does not have to access the disk drives at all; it can quickly serve the data to the hypervisor straight from cache. Note that these drives are configured in a RAID-10 array to provide a better level of data redundancy and data-loss recovery than RAID 0.

    Check out the video of the DVS Simplified Solution Setup