Last week on Direct2Dell, I posted an overview of the Dell Boomi Application Integration solution, which allows existing legacy applications to seamlessly integrate with state-of-the-art SaaS solutions via an easy-to-use drag-and-drop wizard. In this post, I want to dig a bit deeper into the Dell Boomi solution to better explain how it works and why you should select it.
Why Dell Boomi?
Many customers first consider using custom code, an appliance, or a cloud edition as a solution to the integration issue. The following charts detail why Dell Boomi is a superior option to those choices:
Custom Code vs. Dell Boomi
Appliance vs. Dell Boomi
Cloud Edition vs. Dell Boomi
How Does Dell Boomi Work?
The solution process for Dell Boomi follows three steps:
The Dell Boomi wizard allows customers to easily connect items from the legacy and SaaS solutions using a visual tool. The tool auto-generates up to 80% of mappings and leverages thousands of live data maps available in the Dell Boomi community, reducing integration setup time and ensuring your solution aligns with previously deployed solutions.
In a future Dell TechCenter blog post, I will review the various deployment methodologies for Dell Boomi.
A strong feature of the Dell Boomi solution is the fast growing and evolving online community for customers. The community contains the following resources:
For more information on Boomi, select from these options:
The isohybrid tool creates hybrid images from bootable ISO images. A hybrid image created by isohybrid can be used either as a USB image or as an ISO image.
While creating a hybrid image, the tool adds an MBR (Master Boot Record) in the first 512 bytes of the ISO image and pads the end of the image with zeros so that the size of the final image is a multiple of 1 MB. The resulting MBR lists one partition, starting at offset zero (by default) and ending at the end of the hybrid image. For example, fdisk -l on an isohybrid image would look like the following:
linux# fdisk -l Centos60-Base.iso
Disk Centos60-Base.iso: 231 MB, 231735296 bytes
64 heads, 32 sectors/track, 221 cylinders, total 452608 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xf2d79d93
Device Boot Start End Blocks Id System
Centos60-Base.iso1 * 0 452607 226304 83 Linux
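For completeness, here is a minimal sketch of how such an image is typically created and written to a USB key. The image name matches the example above, but /dev/sdb is an assumption for the USB device; double-check it with fdisk -l before writing, since dd will overwrite whatever is on the target:
linux# isohybrid Centos60-Base.iso
linux# dd if=Centos60-Base.iso of=/dev/sdb bs=1M
linux# sync
isohybrid modifies the ISO in place, adding the MBR and padding described above; once the dd completes, the key should boot on servers with the BIOS setting discussed below.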
BIOSes usually expect the MBR to occupy the first 512 bytes and the first partition to start after the first sector (the first 512 bytes). Since these two overlap in the case of hybrid ISO images, the server's BIOS gets confused, treats the image as a floppy image, and fails to boot with the message "isolinux.bin missing or corrupt". If the "USB Flash Drive Emulation Type" setting in the BIOS is changed from "Auto" to "HDD/Hard Disk", the image will boot: this setting forces the BIOS to read the MBR and boot from the USB key as if it were an HDD. Below are screenshots of the required BIOS changes on 8G, 9G, 10G, and 11G servers, respectively:
The other obvious workaround is to build a hybrid image whose first partition starts at a non-zero offset. Unfortunately, this introduces other issues. According to the ISO 9660 specification, the first 32KB of an ISO image is unused (mostly zeroed out). The isohybrid tool uses the first 512 bytes of this space to set up the MBR of the hybrid image. A partition that starts at a non-zero offset between 0 and 32KB will therefore have all zeros at its start, instead of a superblock providing the filesystem information. This causes the mount program to fail to mount the partition (e.g., /dev/sda1), which in turn causes a boot failure.
As far as I know, xorriso is the only tool that creates hybrid images with a non-zero offset that can be dual-mounted (both as the overall device [/dev/sda] and as a partition [/dev/sda1]). More information about this tool can be found on the wiki page: http://libburnia-project.org/wiki/PartitionOffset.
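As a hedged sketch of that approach (the boot file paths, the isohdpfx.bin location, and the source directory below are placeholders that vary by distribution), an image with a 32KB partition offset can be produced through xorriso's mkisofs emulation:
linux# xorriso -as mkisofs -o hybrid.iso \
        -partition_offset 16 \
        -b isolinux/isolinux.bin -c isolinux/boot.cat \
        -no-emul-boot -boot-load-size 4 -boot-info-table \
        -isohybrid-mbr /usr/lib/syslinux/isohdpfx.bin \
        ./iso_root
Here -partition_offset 16 places the partition start 16 blocks of 2KB (i.e., 32KB) into the image, so the partition begins where a filesystem superblock can actually be found and the image remains dual-mountable.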
To sum up, hybrid ISO images created with a zero offset boot once the right BIOS changes are made. Images with a non-zero offset do not boot unless the boot scripts are updated to mount the overall device instead of the partition, or the right tools are used to create images that support dual-mounting.
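To illustrate the dual-mounting distinction (assuming the key appears as /dev/sdb):
linux# mount -o ro /dev/sdb /mnt/iso
linux# umount /mnt/iso
linux# mount -o ro /dev/sdb1 /mnt/iso
Mounting the overall device works for either layout. Mounting the partition succeeds on a dual-mountable image such as one produced by xorriso with a partition offset, but on a naively offset image the last command fails with an unrecognized filesystem, which is exactly the boot failure described above.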
Welcome to my first post in a series about the Top 5 Performance Killers in SharePoint Storage. With SharePoint exploding across organizations, many of these same organizations are going to suddenly feel the impact that all the new user content stored in SharePoint has on SharePoint performance.
Exuberant use will increase the data stored in SharePoint, including large files and multiple versions of files. This isn’t a bad thing; in fact, it usually indicates good adoption and effective collaboration. But this explosion of the content database can cause an outcry from two quarters: end users complain about SharePoint’s slow performance, and SQL DBAs object that SharePoint is taking up too much expensive server space and processing power.
SharePoint Storage Performance Killer #1:
BLOBs, BLOBs and more BLOBs: How Unstructured Data is Taking Over SQL
Consider the core document types you store in SharePoint: PDFs, Word and PowerPoint files, large Excel spreadsheets. A document in any of these common file types is usually well over a megabyte, which means it is saved in SQL Server as a BLOB (Binary Large OBject). Having many BLOBs in SQL Server causes several issues.
First, BLOBs take up storage – a lot of storage. Whenever a user creates a new file in SharePoint, a new BLOB is created, increasing the size of the content database. If versioning is turned on in the SharePoint environment, each new version of a file requires a new BLOB, so a 10 MB PowerPoint presentation that has been altered 10 times takes up 100 MB of space even though it is just one file! In fact, BLOBs can make up 90-95% of the storage overhead of a SharePoint content database in SQL Server (http://blogs.msdn.com/b/mikewat/archive/2008/08/19/improving-sharepoint-with-sql-server-2008.aspx).
Moreover, those BLOBs aren’t just consuming space; they are also using server resources. A BLOB is unstructured data. Therefore, any time a user accesses any file in SharePoint, the BLOB has to be reassembled before it can be delivered back to the user – taking SQL processing power and time. And when SQL Server is reading data from the database files, it has to skip over BLOBs in the data files that aren't essential to processing a query, which also consumes processing power and further slows SharePoint performance.
The solution is to move BLOBs out of SQL Server and into a secondary storage location – specifically, a higher density storage array that is reasonably fast, like a file share or network attached storage (NAS).
This approach addresses both the storage and server resource issues that BLOBs cause. In fact, a recent Microsoft whitepaper reports that externalizing BLOBs in SharePoint reduces the size of the content database in SQL to just 2-5% of its original size. Moreover, externalized data was found to be 25% faster in end-user recall times and 30.8% faster in user upload times (see this post here for more details). This performance improvement is due, in part, to increased BLOB transfer speeds between SQL Server and the client, since some of the file transfer can be handed off to the Windows Server operating system rather than the BLOB having to be processed and reassembled by the SQL Server process.
Microsoft offers two options for offloading BLOBs to less expensive, more efficient storage: external BLOB storage (EBS) and remote BLOB storage (RBS). EBS is Microsoft’s original BLOB management API, and Microsoft has established RBS as its successor technology. Both are fully supported parts of the SQL Server ecosystem, but because they are fairly new, administrators are wary of using them. One concern is that offloaded BLOBs could be easily lost. Although this fear is unfounded, other concerns are legitimate. In particular, both EBS and RBS offer limited management capabilities and no flexibility in the rules for externalizing data. When used natively, these options only add to the complexity of managing SharePoint storage. For example, ordinary actions like deleting files or creating new versions leave behind orphaned BLOBs in the BLOB store, requiring manual cleanup. This restriction, the lack of administrative tools (it’s all done through T-SQL queries), and the relative inflexibility of the FILESTREAM RBS provider have led many architects to advise against using the native provider in a production environment without the help of third-party tools such as Quest Storage Maximizer.
Quest Storage Maximizer for SharePoint uses both EBS and RBS to make externalizing content easy. With Storage Maximizer for SharePoint, you can easily move large, old and unused SharePoint data from SQL storage to more efficient storage, like file shares and network attached storage devices, all while keeping the data accessible to the end user via SharePoint.
Storage Maximizer for SharePoint integrates into SharePoint’s Central Administration and offers an intuitive user interface where you can set the scope for externalizing documents from a farm down to a document library, specify the target for the external data, and define the rules for which data to externalize:
Figure 1. Storage Maximizer for SharePoint makes it easy to move large or old SharePoint data from SQL Server to more efficient storage, improving SharePoint performance.
Figure 2. Storage Maximizer enables you to select the scope of a storage definition job, from the farm down to the list or library.
By externalizing the content that slows your system, Storage Maximizer for SharePoint improves SharePoint performance for searching, uploading and accessing data, enhancing user productivity and satisfaction.
Stay tuned to this blog for the rest in this series of The Top 5 Performance Killers in SharePoint Storage!
Content by Dell Desktop Virtualization Engineer Daniel de Araujo (@DFdeAraujo) and Client Virtualization Specialist Lee Burnette (@Lee4Dell).
We’ve all heard the arguments that hinder the adoption of desktop virtualization. Most argue that the capital expenditure alone is too high to even discuss a virtual desktop infrastructure. Others believe the complexity and performance will decrease ROI and drive TCO higher than initially anticipated. Although I find some of these arguments to be true, I believe that simplifying the design, deployment, and maintenance of desktop virtualization will drive lower costs and be the catalyst for the ultimate adoption of VDI.
A solution like no other, Dell DVS Simplified virtually eliminates the complexity and drastically reduces the design and deployment costs attributed to traditional VDI components. Let’s take a closer look at how this is done:
Reduction of Required Components
If you look at a typical VDI deployment, the following components are required to ensure functionality:
• Load Balancers and Connection Brokers to manage desktop sessions and ensure high availability in the environment.
• Compute servers to provision and manage virtualized desktops.
• Management servers to control, monitor, and maintain the entire environment.
• Shared storage with high-speed interconnects to store virtual desktops, golden images, and management databases.
DVS Simplified removes the necessity for the aforementioned components because it contains everything you need for a VDI environment, with the exclusion of shared storage, within the appliance. Here’s a closer look:
Hassle-free Grid Architecture
Instead of having to use multiple management, configuration, and monitoring interfaces, DVS Simplified utilizes Citrix’s VDI-in-a-Box grid architecture to coordinate activities and balance desktops across each server connected to the same grid. This may sound confusing, so it might be easier to think of the grid as a physical fabric that allows the DVS Simplified appliances to communicate with each other. As a result of the grid, operations such as creating and managing golden images, templates, desktops, and users can be performed at the grid level by connecting to any DVS Simplified console. For example, once a golden image is created or modified on an appliance, the appliance will automatically replicate the new image to the other appliances on the grid. There’s no need to transfer images manually.
Local Storage Optimization via Linked Clones
Since the DVS Simplified appliance only utilizes local storage, there’s a preconceived notion that each desktop will occupy a large portion of disk space. For example, if the golden image (i.e., the image that you base your desktops on) is 20GB, many believe that each virtual desktop will occupy 20GB of space. This is simply not true. Through the use of linked clones, each virtual desktop image occupies only an estimated 15% of the original size of the golden image. As a result, the DVS Simplified appliance can save approximately 85% of local storage space.
Dell Proven Performance
In addition to the simplicity of the solution, the DVS Simplified appliance provides the necessary components to give the end user an acceptable virtual desktop experience. This is no small feat, since one of the most difficult tasks when designing a VDI environment is selecting the right balance of CPU cores, memory capacity, network bandwidth, and storage capacity and speed within a server. Why, you ask? The main driving force is the huge amount of small-block disk IO that all the virtual desktops generate. Since disks are shared among all the operating systems on the server, all requests to disk are randomized, which causes storage performance to decrease substantially.
In order to mitigate the degradation of storage IO, the DVS Simplified appliance contains eight 146GB 15K 6 Gbps SAS drives behind a PowerEdge RAID Controller (PERC) with 1 GB of cache. The eight drives provide enough IO capacity to satisfy a large number of desktops, while the 1 GB of cache accelerates read operations by caching frequently accessed data. As a result of caching, the PERC does not have to access the disk drives to read data; it can simply provide the data from the cache to the hypervisor quickly. Note that these drives are configured in a RAID-10 array to provide a better level of data redundancy and data-loss recovery than RAID 0.
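As a rough, back-of-the-envelope illustration of those numbers (figures assumed from the configuration described above, not measured results):
8 drives ÷ 2 (RAID-10 mirroring) × 146 GB ≈ 584 GB usable capacity
20 GB golden image × 15% ≈ 3 GB per linked clone
In other words, each clone consumes only about 3 GB, so a single appliance’s local storage leaves room for a large number of desktops alongside the golden images.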
Check out the video of the DVS Simplified Solution Setup
There are additional resources you can review about DVS Simplified at: http://content.dell.com/us/en/enterprise/d/flexible-computing/dvs-simplified.aspx
Follow us on Twitter at @DellTechCenter, @Lee4Dell & @DFdeAraujo.
The Dell XPS 13 ultrabook is generating buzz in the business world in addition to the consumer world (click here to order an XPS 13). This is not surprising, as IT professionals in the Dell TechCenter community are always looking for new technology that will help their users do their jobs more effectively. The XPS 13 also has the following features aimed specifically at business customers:
For a complete run-down of the XPS 13 features check out Susan Beebe's Direct2Dell blog.
There are many ways that business customers can leverage the XPS 13 in their environment. Some are looking at it as a Bring Your Own Device (BYOD) option, while others that are well invested in deploying and managing Windows will choose to fully incorporate it into their environment. I've spoken with many customers who are familiar with the deployment and management options available on Dell's enterprise-class systems (Latitude, OptiPlex, and Precision) and want to know what capabilities are available for the XPS 13.
To help with these questions, I sat down with Chris Minaugh from Dell IT to discuss how Dell is deploying the XPS 13 internally. That discussion is captured in the short video below.
Leave a comment with your plans to use the XPS 13 in your enterprise or jump to the Dell TechCenter forums to continue the discussion with other IT pros!
This post was written by Rob Cox, Senior Engineering Manager, Dell OpenManage Essentials team
I want to start with a note of thanks to all of you who have downloaded the OME 1.0 Open Evaluation over the last 3 months. Lots of folks have been kind enough to give it a spin and post questions and feedback on DellTechCenter.
We have gotten a tremendous amount of positive feedback from you -- new feature requests, things we needed to make easier or clarify, and a few things we need to fix. I’ve pushed all of this information from the community to the engineering and marketing teams at Dell. We’ve done our best to include as many improvements from feedback as we could into OpenManage Essentials 1.0.1 and some of it we have put on the backlog to consider in a subsequent release.
For those who are new to OME, please use the Dell TechCenter OpenManage Essentials page as the main source of information on the product. We’ll keep it updated with new information, white papers, videos, and tips. With the OME 1.0.1 refresh, we will be posting new whitepapers and videos, as well as updates of several that we had already published. If you have been using the Open Evaluation (Build 30), you can install 1.0.1 on top of it and it will upgrade.
There are a few incremental changes between OME 1.0 and OME 1.0.1. Here is a short list of things to look for:
Again, thanks for all of the feedback. Please continue to post on the forum and keep that feedback coming. We also have a chat on 2/28 at 3 PM US Central Time if you want to interact with us directly.
Decision Support Systems (DSS) are designed to support complex analytical query activities using very large data sets. The user queries executed on a DSS database typically take several minutes (or even hours) to complete and require processing large amounts of data. A DSS query is often required to fetch millions of records from the database for processing. To support these queries, the database management system (DBMS) reads table data from the storage devices. The DBMS typically uses a sequential table scan, fetching numerous records from a table with a single request. The resultant read I/O to storage consists of large I/O blocks (approximately 512KB or 1MB in size). These large I/O requests demand high throughput from storage to the database server for optimal performance.
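As an aside, a synthetic load generator such as fio can approximate this access pattern when validating a storage design. The following is a minimal sketch, not the methodology of the study described below; the target device /dev/sdc, job count, and runtime are placeholder assumptions (and the target will be read heavily, so use a test volume):
linux# fio --name=dss-seqread --filename=/dev/sdc --rw=read \
        --bs=1M --direct=1 --numjobs=4 --group_reporting \
        --time_based --runtime=60
The aggregate bandwidth fio reports gives a first-order estimate of the sequential throughput the storage can sustain for 1MB reads, the pattern described above.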
In addition to the significant I/O throughput required, the large data sets common to DSS user queries also require substantial processing resources (CPU and RAM); therefore, the database server must be provided with sufficient processing and memory resources to turn the raw query results into the usable data the user intends the query to produce.
The large I/O patterns and processing necessary in DSS queries warrant careful system design to ensure that the performance requirements are met by the storage arrays as well as the server and interconnect fabric. The overall performance of a DSS solution is determined by the performance characteristics of each component in the system. These include database and operating system settings, server resources, SAN design and switch settings, storage Multipath I/O (MPIO) software, storage resources, and storage design.
Dell EqualLogic 10GbE iSCSI storage arrays offer the high I/O throughput required by typical DSS applications. We recently conducted a series of tests to characterize DSS workloads on Dell PowerEdge servers, Dell PowerConnect network switches, and EqualLogic storage arrays. From those tests, best practices were validated and scalability guidelines were developed for designing storage architectures best suited to support the storage demands of DSS applications running on Oracle 11g R2 databases.
Details of these tests as well as sizing and best practices guidelines for DSS workloads using EqualLogic storage arrays are available in this paper. The study showed that an ideal building “block” can consist of one Oracle RAC node running on a PowerEdge R710 server with two EqualLogic PS6010XV arrays for the storage backend, as shown below.
Figure 1. Scalable Modular Block
Figure 2 below shows the I/O throughput available from these blocks.
Figure 2. I/O Throughput Block Scalability
Check out the details from the paper here.
Dell TechCenter will be hosting a TechChat on Tuesday, February 28th, at 3 PM US Central Time.
We have two great topics lined up, covering the new OpenManage Essentials release and the newest Dell EqualLogic 10GbE arrays.
Highlights in the new OME 1.0.1 release include:
Simply select the required optimizations, save the configuration for future reference, and then click Run; your virtual desktop is now optimized. If you are launching this application on a template that has UAC enabled, right-click it and select ‘Run as Administrator’.
The Quest Desktop Optimizer is part of the vWorkspace 7.5 media and is found in the 'Template_Tools' directory.
It is also interesting to monitor the template with and without optimizations so you can see the CPU and Memory savings for yourself.
To learn more about vWorkspace 7.5 in general, consider these resources. I specifically recommend reading the whitepaper ‘Desktop Virtualization: A Cost and Speed Comparison’.
Earlier today, Dell revealed the 12th Generation of PowerEdge servers at a San Francisco press conference led by Dell Chief Executive Officer, Michael Dell. At the press event, Dell shared how the new servers were designed with customer needs in mind, whether that means making servers less time consuming to monitor and manage, more powerful to handle larger amounts of data, or more efficient to save money over the life of the system.
Over the coming weeks, you'll hear more about the 12th Generation of servers, the technologies behind them, and how to take full advantage of these new Dell products and solutions in your environments. For technical articles, whitepapers, videos, a forum, and online chats about the subject, DellTechCenter.com/12thGen is our landing page on the Dell TechCenter for all content relating to this launch.
Some Dell customers had the opportunity to test our Dell 12th Generation PowerEdge servers over the last several months. We're excited about how advancements on the new servers will benefit the IT professionals that use Dell products and we look forward to delivering even more technical information in the future. For now, see what customers have to say about improvements made on the Dell 12th Gen servers in a variety of IT technology areas:
“The agent-free monitoring capabilities available with the new Dell™ PowerEdge™ servers will let us dedicate 100 percent of our CPU cycles to the scientific applications so we can maximize performance.” -- Dr. Tommy Minyard, Director of Advanced Computing Systems (ACS), Texas Advanced Computing Center, The University of Texas at Austin
“In managing and maintaining the high-performance computing systems at the University of Utah, we need to be very responsive to new requests and potential issues, around the clock. We rely on the Integrated Dell Remote Access Controller (iDRAC) to address problems remotely—sometimes from home, in the middle of the night—without having to run into the data center. The iDRAC7 with Lifecycle Controller available with the Dell™ PowerEdge™ 12th generation servers will help ensure that HPC resources are available whenever users need them. We couldn’t run without iDRAC.” -- Erik Brown, Systems Administrator, Center for High-Performance Computing, University of Utah
“With the introduction of PCIe-based solid-state drives (SSDs) in the new Dell™ PowerEdge™ servers, we anticipate breaking through the disk I/O boundary that was limiting server performance for our hosting customers. We’re making a big push into SSDs to boost overall throughput. Having hot-swappable SSD drives in the Dell PowerEdge servers will enable us to achieve that throughput while helping us ensure uptime.” -- IT Director, from a leading health insurance application service provider
“Local storage is becoming more important, making the Dell PowerEdge R720, with up to 16 internal disk drives, a good choice – particularly for I/O intensive databases. In our view, the high number of possible disk drives is a great development. We’ve wanted a server like that for a long time.” -- Thomas Franken, Systems Architect, Datacentre & Infrastructure, PIRONET NDH
“One of our biggest IT challenges is power consumption. We are constantly looking for ways to maximize processing performance for our Web-based applications while minimizing the power consumed by hardware. Our evaluation of a new Dell™ PowerEdge™ server showed that these servers can deliver outstanding performance for data processing while using very little power. This means we can accommodate company growth by running more servers in our data center while staying within our power envelope and controlling our energy costs.” -- Matthew Woodings, Chief Technology Officer, HotSchedules
“The energy saving of the Dell PowerEdge M620 compared to the R710 is around 10 per cent. If you calculate this over a year it really adds up.” -- Max Wagener, Head of Enterprise Architecture, GfK
“Moving to 10 Gigabit Ethernet (10GbE) connectivity will change the way we deploy systems and Dell’s native support for the standard across its enterprise portfolio is very attractive. By capitalizing on the 10GbE connectivity option with Dell’s new PowerEdge servers, we will improve throughput allowing for increased workloads while significantly simplifying network management, and enabling better power and cooling.” -- Alex Rodriguez, Vice President of Systems Engineering and Product Development, Expedient
“We need a flexible approach to network connectivity so we can make changes without having to rip and replace servers. With the Dell™ Select Network Adapters on the Dell PowerEdge™ 12th generation servers, we have the flexibility to easily change vendors, technologies, or speeds. Administrators can simply pop out one card and plug in another.” -- IT Director, from a leading health insurance SaaS provider
“We have just seven IT administrators to manage more than 1,000 virtual and physical servers spread across multiple, geographically disparate facilities. We depend on systems management tools such as Dell OpenManage™ software and the Integrated Dell Remote Access Controller (iDRAC) to manage servers remotely around the clock, without having to physically travel to our data centers. We look forward to capitalizing on the integration of Dell OpenManage with VMware® vCenter™ and the new iDRAC7 with Lifecycle Controller capabilities available with the Dell™ PowerEdge™ 12th generation servers to further streamline server management and enable us to focus on more strategic projects.” -- Michael Ebeling, Senior Server System Analyst, Moneygram International
“We tested the Dell PowerEdge 12th-generation servers in a VMware environment and saw excellent performance. The operating system can be Microsoft or Linux – the results are just as good.” -- Hervé Bouvet, Server Infrastructure Manager, Sopra Group
“To succeed in the competitive financial services industry, we need to make sure that all of our services are available to employees and customers when they need them—we can’t afford downtime. By running redundant hypervisors on dual SD cards, the Dell™ PowerEdge™ 12th generation servers can help us maintain high availability for our virtualized environment. If one SD card goes down or needs to be replaced, we can continue to run our mission-critical applications and databases.” -- Michael Ebeling, Senior Server System Analyst, Moneygram International