Dell Community

Blog Group Posts
Application Performance Monitoring Blog Foglight APM 105
Blueprint for HPC - Blog Blueprint for High Performance Computing 0
Custom Solutions Engineering Blog Custom Solutions Engineering 8
Data Security Data Security 8
Dell Big Data - Blog Dell Big Data 68
Dell Cloud Blog Cloud 42
Dell Cloud OpenStack Solutions - Blog Dell Cloud OpenStack Solutions 0
Dell Lifecycle Controller Integration for SCVMM - Blog Dell Lifecycle Controller Integration for SCVMM 0
Dell Premier - Blog Dell Premier 3
Dell TechCenter TechCenter 1,858
Desktop Authority Desktop Authority 25
Featured Content - Blog Featured Content 0
Foglight for Databases Foglight for Databases 35
Foglight for Virtualization and Storage Management Virtualization Infrastructure Management 256
General HPC High Performance Computing 227
High Performance Computing - Blog High Performance Computing 35
Hotfixes vWorkspace 66
HPC Community Blogs High Performance Computing 27
HPC GPU Computing High Performance Computing 18
HPC Power and Cooling High Performance Computing 4
HPC Storage and File Systems High Performance Computing 21
Information Management Welcome to the Dell Software Information Management blog! Our top experts discuss big data, predictive analytics, database management, data replication, and more. Information Management 229
KACE Blog KACE 143
Life Sciences High Performance Computing 9
OMIMSSC - Blogs OMIMSSC 0
On Demand Services Dell On-Demand 3
Open Networking: The Whale that swallowed SDN TechCenter 0
Product Releases vWorkspace 13
Security - Blog Security 3
SharePoint for All SharePoint for All 388
Statistica Statistica 24
Systems Developed by and for Developers Dell Big Data 1
TechCenter News TechCenter Extras 47
The NFV Cloud Community Blog The NFV Cloud Community 0
Thought Leadership Service Provider Solutions 0
vWorkspace - Blog vWorkspace 511
Windows 10 IoT Enterprise (WIE10) - Blog Wyse Thin Clients running Windows 10 IoT Enterprise Windows 10 IoT Enterprise (WIE10) 4
Latest Blog Posts
  • KACE Blog

    K2000 Kloser Look: Pre-Installation Task Only Deployment Task Engine

    We get asked from time to time how to deliver a deployment that only includes Pre-installation tasks. Perhaps you want to format devices before sending them out for donation/recycling or you want to perform an Offline User State migration without deploying an OS onto the device. Whatever the reason, this can be accomplished in a few quick steps!

    1. First, create a folder on your desktop and name it “No Source Media”. Within this folder, create a new text document and also name it “No Source Media” (a scripted version of this step is sketched just after these steps).
    2. Next, open up your K2000 Media Manager and enter your Hostname and Samba Share Password. For a name, enter “No Source Media”, select Windows x86 (or x64 if you plan to run 64-bit specific tasks) and browse to the folder you created on your Desktop. This will upload “blank” Source Media to be used with a Scripted Installation.
    3. Now create a new Scripted Installation, name it appropriately (for example, Wipe Drives – No Source Media), choose the new No Source Media as your OS media, and use the No Answer File option.
    4. You will now have the option to add Tasks to your deployment. Select only Pre-Installation Tasks and click Next and Finish to save your new deployment.
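
    If you prefer to script step 1, here is a minimal PowerShell sketch that creates the placeholder folder and empty text file on the desktop (the names follow the example above; adjust the path if your desktop folder is redirected):

        # Create the blank "No Source Media" folder and placeholder text file on the desktop
        $folder = Join-Path ([Environment]::GetFolderPath('Desktop')) 'No Source Media'
        New-Item -ItemType Directory -Path $folder -Force | Out-Null
        New-Item -ItemType File -Path (Join-Path $folder 'No Source Media.txt') -Force | Out-Null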

    When you boot into your KBE, this “deployment” will be available to execute the tasks you have associated with it. When it reaches the OS installation portion of the deployment, it will fail; at that point, shut down or restart the computer using the options provided.

  • Dell TechCenter

    USB3 Kernel Debugging with Dell PowerEdge 13G Servers

    This blog was originally written by Thomas Cantwell, Deepak Kumar and Gobind Vijayakumar from the Dell OS Engineering team.

    Introduction -

    Dell PowerEdge 13G servers are the first generation of Dell servers to have USB 3.0 ports across the entire portfolio.  This provides a significant improvement in data transfer speed over USB 2.0.  In addition, it offers an alternative method for debugging the Microsoft Windows OS, starting with Windows 8/Windows Server 2012 and later.

    Background –

    Microsoft Windows has supported kernel debugging over USB 2.0 since Windows 7/Windows Server 2008 R2, though there were some significant limitations to debugging over USB 2.0:

    1. The first port had to be used (with a few exceptions – see http://msdn.microsoft.com/en-us/library/windows/hardware/ff556869(v=vs.85).aspx ).
    2. Only a single port could be set for debugging.
    3. You had to use a special hardware device on the USB connection (http://www.semiconductorstore.com/cart/pc/viewPrd.asp?idproduct=12083).
    4. It was not usable in all instances – in some cases, a reboot with the device attached would hang the system during BIOS POST.  The device had to be removed to finish the reboot and could be reattached once the OS started booting.  This precluded debugging of the boot path.

    USB 3.0 was designed from the ground up, by both Intel and Microsoft, to support Windows OS debugging; it has much higher throughput and is not limited to a single port for debugging.

    Hardware Support - 

    As previously stated, Dell PowerEdge 13G servers will now support not only USB 3.0 (also known as SuperSpeed USB), but also USB 3.0 kernel debugging. 

    BIOS settings -

    Enter the BIOS and enable USB 3.0 – it’s under the Integrated Devices category (by default, it is set to Disabled).

    • IMPORTANT!  ONLY enable USB 3.0 if the operating system has support!  Windows 8/Windows Server 2012 and later have this capability.  If you enable this and the OS does NOT have support, you will lose USB keyboard/mouse support when the OS boots.

    Ports –

    • USB 3.0 ports on Dell PowerEdge 13G servers can be used for Windows debugging (with USB 3.0 enabled and the proper OS support). 
    • Some systems, such as the Dell PowerEdge T630, also have a front USB 3.0 port.  The Dell PowerEdge R630/730/730XD have only rear USB 3.0 ports.  The Dell PowerEdge M630 blade also has one USB 3.0 front port.

    Driver/OS support -

    USB 3.0 drivers are native in Windows Server 2012 and Windows Server 2012 R2. There is no support for USB 3.0 debugging in any prior OS versions.

    USB3 Debugging Prerequisites –

    • A host system with an xHCI (USB 3.0) host controller.  The USB 3.0 ports on the host system do NOT need USB 3.0 debug support – only the target system must have that.
    • A target system with an xHCI (USB 3.0) host controller that supports debugging.
    • A USB 3.0 (A-A) crossover cable. You can get the cable from many vendors; we have provided one option below.
      • Note: The USB 3.0 specification states that pins 1 (VBUS), 2 (D-), and 3 (D+) are not connected. This means the cable is NOT backwards compatible with USB 2.0 devices.
    • http://www.datapro.net/products/usb-3-0-super-speed-a-a-debugging-cable.html

    Steps to Setup Debugging Environment -

    1. Make sure USB 3.0 is enabled in the BIOS on both the host and the target.
      1. All Dell PowerEdge 13G servers support USB 3.0 debugging.
    2. Verify the OS on the host and target systems.
      1. The OS must be Windows 8/Windows Server 2012 or Windows 8.1/Windows Server 2012 R2 on both.  For the debug host, a client OS is perfectly fine for debugging a server OS on the debug target.
      2. It is strongly recommended to use Windows 8.1 or Windows Server 2012 R2 on the host so you can get the latest supported Windows debugging software – you can then debug all current and older OS versions (that support USB 3.0 debugging) from the host.  See http://msdn.microsoft.com/en-us/library/windows/hardware/ff551063(v=vs.85).aspx

    3. On the target system, use the USBView tool to locate the specific xHCI controller that supports USB debugging.  This tool is part of the Windows debugging tools; see http://msdn.microsoft.com/en-in/library/windows/hardware/ff560019(v=vs.85).aspx .

    To run this tool, you must also install .NET 3.5, which is not installed by default on either Windows 8/2012 or Windows 8.1/2012 R2.
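
    On a server target, a minimal PowerShell sketch for adding .NET 3.5 (assuming the feature payload is available; you may need to point -Source at the sources\sxs folder of the install media):

        # Add the .NET Framework 3.5 feature so USBView can run (Server 2012 / 2012 R2)
        Install-WindowsFeature NET-Framework-Core
        # On client SKUs (Windows 8/8.1) the equivalent is:
        # Enable-WindowsOptionalFeature -Online -FeatureName NetFx3 -All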

    1. On Dell PowerEdge 13G servers there are several USB controllers – some are designated “EHCI”; these are USB 2.0 controllers.  The controller for USB 3.0 is designated “xHCI”, as we see below.
      1. Note the bus-device-function number of the specific xHCI controller that will be used for debugging – this is important for proper setup of USB 3.0 debugging.  (A PowerShell alternative for finding these numbers is sketched after this list.)
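
    As an alternative to USBView for finding the bus, device and function numbers, here is a minimal PowerShell sketch (assuming Windows 8.1/Server 2012 R2 or later, where the PnpDevice cmdlets are available; USBView remains the way to confirm the controller is actually debug capable):

        # List USB host controllers with their PCI location (bus, device, function)
        Get-PnpDevice -Class USB -PresentOnly |
          Where-Object { $_.FriendlyName -match 'Host Controller' } |
          ForEach-Object {
            $loc = Get-PnpDeviceProperty -InstanceId $_.InstanceId -KeyName 'DEVPKEY_Device_LocationInfo'
            '{0} -> {1}' -f $_.FriendlyName, $loc.Data   # e.g. "... xHCI ... -> PCI bus 0, device 20, function 0"
          }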

    4. Next, find the specific physical USB port you are going to use for debugging.

    1. On Dell servers, the “SuperSpeed” logo is presented beside any USB 3.0 ports.

    • Connect any device to the port you wish to use for debugging.
    • Observe changes in USBView (you may have to refresh the view to see the new device inserted in the port).
    • Verify that the port does indeed show it is “Debug Capable” and has the “SS” designation on the port.

     

     

    5. Operating System Setup – Target system
      1. Open an elevated command prompt (or PowerShell session) on the target system and run the following commands (a consolidated sketch follows this list).
    • bcdedit /debug on
    • bcdedit /dbgsettings usb targetname:Dell_Debug (any valid target name can be given here)
    • bcdedit /set "{dbgsettings}" busparams b.d.f (provide the bus, device and function numbers of the required xHCI controller here)
      • Example: bcdedit /set "{dbgsettings}" busparams 0.20.0
      • The busparams setting is important since Dell PowerEdge 13G systems have multiple USB controllers.  This ensures debugging is enabled only for the USB 3.0 (xHCI) controller.
      • Reboot the server after making the changes above!

    6. Connect the host and target systems using the USB 3.0 A-A crossover cable, through the port identified above.
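
    Putting the target-side configuration together, here is a minimal sketch of the command sequence (run from an elevated PowerShell session on the target; Dell_Debug and 0.20.0 are the example values from above, so substitute your own target name and your controller's bus.device.function):

        bcdedit /debug on
        bcdedit /dbgsettings usb targetname:Dell_Debug    # any valid target name
        bcdedit /set "{dbgsettings}" busparams 0.20.0     # bus.device.function of the xHCI controller
        bcdedit /dbgsettings                              # optional: verify the settings before rebooting
        Restart-Computer                                  # reboot so the settings take effect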

    Steps to Start the Debugging Session –

    Open a compatible version of WinDbg as administrator (very important!).

    1. Starting the debug session as administrator will ensure the USB debug driver loads.
    2. For USB 3.0 debugging, the OS on the host must be Windows 8/Server 2012 or later and match the “bitness” of the target OS - either 32-bit (x86) or 64-bit (x64). If you are debugging a 64-bit OS (all 2012+ Windows Server versions are 64-bit), then the host OS should be 64-bit as well.
    3. Open File -> Kernel Debug -> USB and provide the target name you set on the target (in our example, Dell_Debug), then click OK.  (A command-line way to launch the session is sketched after this list.)


     

    4. The USB debug driver will be installed on the host system.  This can be checked in Device Manager (see below); for successful debugging there should not be a yellow bang on this driver.  It is OK that it says “USB 2.0 Debug Connection Device” – this is also the USB 3.0 driver (it works for both transports).  The driver is installed the first time USB debugging is invoked.
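
    If you prefer launching from the command line rather than the WinDbg menus, a minimal PowerShell sketch is below (it assumes windbg.exe is on the PATH; otherwise use the full path to your Debugging Tools for Windows installation, and substitute your own target name):

        # Launch WinDbg elevated and attach to the USB kernel-debug target
        Start-Process windbg -Verb RunAs -ArgumentList '-k', 'usb:targetname=Dell_Debug'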

    Notes:

    1. If you are debugging for an extended time, disable the “selective suspend” capability for the USB hub under the xHCI controller to which the USB debug cable is connected.

    In Device Manager:

    1. Choose the “View” menu.
    2. Choose “View devices by connection”.  There are multiple USB hubs, so to get the correct hub you need to find the hub that is under the xHCI controller.
    3. Navigate to the specific xHCI controller.
    4. Under the xHCI controller, you will see a USB Hub.
    5. Choose Properties for the USB Hub, open the Power Management tab, and uncheck “Allow the computer to turn off this device to save power” (see picture below).

     

    Summary – Dell PowerEdge 13G servers with USB 3.0 provide a significant new method for debugging modern Windows operating systems.

    • USB 3.0 is fast.
    • USB 3.0 debugging is simple to set up.
    • USB 3.0 debugging requires a minimal hardware investment (special cabling).
    • For Dell PowerEdge 13G blades (M630), this provides a new way to debug an individual blade.  Prior methods to debug Dell blades used the CMC (Chassis Management Controller), and routed serial output from the blade to the CMC – harder to configure and limited to serial port speeds.
    • The following comparison of debug transport speeds is somewhat dated, but gives a good general idea of relative performance.

    Transport                  Throughput (KB/s)    Faster than Serial
    Serial Port                10                   0%
    Serial Over Named Pipe     50                   500%
    USB2 EHCI                  150                  1500%
    USB2 On the Go             2000                 20000%
    KDNET                      2400                 24000%
    USB3                       5000                 50000%
    1394                       17000                170000%
  • General HPC

    The Australian National University Builds On-Demand, High Performance Science Cloud Node

    Housed at the Australian National University (ANU) in Canberra, the National Computational Infrastructure (NCI) has long hosted many of the country's scientific organizations, which routinely perform data-intensive computations on the university's Raijin supercomputer.  Facing greater and more complex demand, NCI was recently tasked with building an on-demand, high-performance "science cloud" node.

    Two requirements were imperative: the compute environment had to react easily when highly data-intensive workloads were suddenly fed through the supercomputer - even when it was running at 90 percent to full capacity - and it had to be able to string together existing complex environments.  These requirements meant the OpenStack-based cloud had to be designed for flexibility in order to support such dynamic workflow processes.

    To create the science cloud, NCI built a 3,200-core compute cloud based on Intel Xeon CPUs housed within 208 Dell PowerEdge C8220 and C8220X compute nodes, 13 Dell PowerEdge C8000XD storage nodes, and PowerEdge R620 rack servers.  This allowed NCI to meet its design goal of establishing a cloud capability that complements and extends the existing investment in supercomputing and storage, while addressing potential end-user shortcomings.

    The Australian government has provided over $100 million (Australian) for developing the hardware infrastructure, the necessary support tools and virtual laboratories.

    You can read about the NCI's science cloud in greater detail here.

  • General HPC

    Enterprise Strategy Group Impressed with Our Big Data Analytics Solutions

    by Joey Jablonski

    Recently the Enterprise Strategy Group (ESG) published an interesting white paper on Dell's big data and analytics solutions.  Geared toward an important segment of the market that has so far been underserved, the company's versatile array of products and services is seen as a faster way for customers to realize value from any number of data sources.

    The white paper highlights three important themes:

    • Commitment to Big Data - Having entered the Big Data market almost five years ago, the company understood that a deep commitment - including the necessary investment and dedication to clients - was essential to success in this growing market.
    • Comprehensive offering - The Big Data market is still relatively new, and customers need greater collaboration to navigate it.  Offering complementary software, servers, storage and networking equipment along with professional services provides the support customers seek, and opens up new markets.
    • Driving the industry - New appliances, like the soon-to-be-released Cloudera In-Memory Appliance, show a continued dedication to emerging technology that allows the company to drive the industry and help the Big Data and analytics market evolve.

    The ESG white paper argues that these big data and analytics solutions offer a large, yet still relatively untapped, market the opportunity to focus on the business outcomes to be realized rather than on the complexity of IT needs.

    The ESG white paper in its entirety is attached.

  • Information Management

    Using Oracle Enterprise Manager? See How Toad™ DBA Suite for Oracle® Complements It – Part 2 of 3

    Under the Hood: Database Maintenance

    In my last post I described the Toad™ DBA Suite for Oracle® technical brief we recently released, and how the Toad DBA Suite can make performance management – diagnosis, tuning SQL and indexing – much easier for you.

    In this post I’ll summarize the section of the technical brief on database maintenance, which includes common tasks like running scripts, generating reports, assessing security threats and comparing schemas and databases. As in the last post, you can accomplish all these tasks in Oracle Enterprise Manager (OEM) alone, but I think it takes too long. We built Toad DBA Suite so you can do it more efficiently.

    Creating a table similar to an existing table – with easier navigation

    This simple, common task brings up two typical advantages of Toad DBA Suite: Not only does it take fewer steps to execute, but you don’t have the overhead of launching a new web page at every step. You open the Schema Browser, right-click the desired table and select Create Like.

    You can accomplish the same thing in OEM, but with a new web page at almost every step: Targets, Databases, Instance, Administration, Schema Tables, etc. That may be convenient for maintaining a database from a computer anywhere in the world, but if you do most of your database administration from your regular workstation, it’s a big downside for a little upside.

    Getting an overview of all your databases – in one window

    Suppose you want an overview of the databases you’re managing, several of which may be running on the same host. You’re likely to want information on the data files, memory layout or top sessions.

    With OEM you can find CPU information on the Database Home Page, but memory and tablespace layout are on two other pages.

    The Database Browser in Toad DBA Suite shows you information on all of your managed databases, combined in a single view, with access to database and schema objects in the navigator. It gives you a bigger picture of all your managed databases instead of a picture of just one database at a time.

    Mind you, each of these products has database maintenance features that the other does not have, and page 11 of the technical brief includes a detailed feature comparison. But besides the convenient overview in the Database Browser, Toad’s other big advantage in database maintenance is the Automation Designer, which lets you set up a group of tasks and run them against multiple managed databases, rather than running them separately for each database. That can save you a lot of time and drudgery.

    One more thing: If you ever need to run any of these tasks while you’re out of the office, MyToad enables you to run Apps and Actions created in Automation Designer remotely from a mobile device.

    Coming up

  • KACE Blog

    K1000 Kloser Look: Metering 101

    The K1000 metering tool received a major upgrade in 5.5 with the addition of the new Software Catalog. The catalog allows you to meter suite applications more precisely than ever before. A simple three-step process lets you begin metering your applications and collecting long-term usage data to help you save money come license renewal time. In 6.0 the same catalog also improved application control, but we’ll talk about that another time.

    To meter applications, the first thing you need to do is determine which software applications you need to meter. This sounds easy, and it is, but you should still spend a few minutes thinking it through. Metering Office, for example, can allow you to downgrade people from “Office Pro Plus” to “Pro” or “Standard”, saving you money. While many employees will need Outlook, Word, Excel, and PowerPoint, they may not need some of the more powerful tools such as InfoPath or Access.
    The K1000 metering tool allows you to view the usage of these items on a per-application and/or per-suite basis, per machine. Once you’ve determined your applications, simply browse to Inventory > Software Catalog, then search for and click on the desired software title. From the catalog detail page for the application, simply check the “Metered” box in the top left corner of the display window. This will meter application usage for all labels where metering is enabled. A device label can be configured for metering in the Label Management screen, or at the time of creation.

    After you’ve performed these tasks, it is simply a waiting game to see who uses these applications and how often. This data can be gathered from a report or by individually viewing the Software Catalog entry for the metered item. Remember to allow more than a few days, and really weeks, for quality usage data to be collected.

  • Dell Big Data - Blog

    Introducing Dell QuickStart for Cloudera Hadoop

    by Sanjeet Singh

    At Dell, we recognize the importance of data as the new currency and as a competitive differentiator for customers. Data is being created and consumed at exponential rates, and organizations are scrambling to keep up. This is not unique to a single industry; it is affecting all vertical markets. 

    Today, we are excited to introduce Dell QuickStart for Cloudera Hadoop, which provides an easy starting point for organizations to build a complete Big Data solution.

    Dell QuickStart for Cloudera Hadoop simplifies the procurement and deployment as it includes the hardware, software (including the Cloudera Basic Edition) and services needed to deliver a Hadoop cluster to start you on a proof of concept to begin managing and analyzing your data.