We get asked from time to time how to deliver a deployment that only includes Pre-installation tasks. Perhaps you want to format devices before sending them out for donation/recycling, or you want to perform an Offline User State migration without deploying an OS onto the device. Whatever the reason, this can be accomplished in a few quick steps!
When you boot to your KBE, this “deployment” will be available to execute the tasks you have associated with it. When the deployment reaches the OS installation portion, it will fail, and you will want to shut down or restart the computer using the options provided.
This blog was originally written by Thomas Cantwell, Deepak Kumar and Gobind Vijayakumar from the Dell OS Engineering Team.
Dell PowerEdge 13G servers are the first generation of Dell servers to have USB 3.0 ports across the entire portfolio. This provides a significant improvement in data transfer speed over USB 2.0. In addition, it offers an alternative method to debug Microsoft Windows operating systems, starting with Windows 8/Windows Server 2012 and later.
Microsoft Windows has had the ability to kernel debug over USB 2.0 since Windows 7/Windows Server 2008 R2, though there were some significant limitations to debugging over USB 2.0:
1) The first port had to be used (with a few exceptions – see http://msdn.microsoft.com/en-us/library/windows/hardware/ff556869(v=vs.85).aspx ).
2) Only a single port could be set for debugging.
3) You had to use a special hardware device on the USB connection. (http://www.semiconductorstore.com/cart/pc/viewPrd.asp?idproduct=12083).
4) It was not usable in all instances – in some cases, a reboot with the device attached would hang the system during BIOS POST. The device would have to be removed to finish the reboot and could be reattached when the OS started booting. This precluded debugging of the boot path.
USB 3.0 was designed from the ground up, by both Intel and Microsoft, to support Windows OS debugging: it offers much higher throughput and is not limited to a single port for debugging.
As previously stated, Dell PowerEdge 13G servers will now support not only USB 3.0 (also known as SuperSpeed USB), but also USB 3.0 kernel debugging.
BIOS settings -
Enter the BIOS and enable USB 3.0 – it’s under the Integrated Devices category (by default, it is set to Disabled).
USB 3.0 drivers are native in Windows Server 2012 and Windows Server 2012 R2. There is no support for USB 3.0 debugging in any prior OS versions.
Steps to Set Up the Debugging Environment -
1. On the target system, use the USBView tool to locate the specific xHCI controller that supports USB debugging. This tool is part of the Debugging Tools for Windows. See the following link: http://msdn.microsoft.com/en-in/library/windows/hardware/ff560019(v=vs.85).aspx .
To run this tool, you must also install .NET 3.5. It is not installed by default on either Windows 8/2012 or Windows 8.1/2012 R2.
2. Next, find the specific physical USB port you are going to use for debugging.
3. Connect the host and target systems using the USB A-A crossover cable, plugged into the port identified above.
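Beyond the cabling, the target system also needs kernel debugging over USB enabled in its boot configuration. Below is a minimal sketch, assuming it is run from an elevated (administrator) prompt on the target; it simply wraps the standard bcdedit commands in Python, and the target name “WinDbgTarget” is an arbitrary example placeholder.

```python
# Minimal sketch: enable USB 3.0 kernel debugging in the target's boot
# configuration by wrapping the standard bcdedit commands. Run from an
# elevated (administrator) prompt on the target, then reboot the target.
# "WinDbgTarget" is an arbitrary example name.

import subprocess

TARGET_NAME = "WinDbgTarget"  # must match the name used by WinDbg on the host

def run(cmd):
    """Echo and run a command, raising an error if it fails."""
    print(">", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Turn on kernel debugging for the current boot entry.
run(["bcdedit", "/debug", "on"])

# Select USB as the debug transport and assign the target name.
run(["bcdedit", "/dbgsettings", "usb", "targetname:" + TARGET_NAME])

print("Reboot the target system for the settings to take effect.")
```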
Steps to Start the Debugging Session -
1. On the host system, open a compatible version of WinDbg as administrator (very important!) and start the kernel connection to the target (a hedged command-line sketch follows these steps).
2. The USB debug driver will be installed on the host system. This can be checked in Device Manager (see below); for successful debugging there should not be a yellow bang on this driver. It is OK that it says “USB 2.0 Debug Connection Device” – this is also the USB 3.0 driver (it works for both transports). This driver is installed the first time USB debugging is invoked.
(Screenshot: the “USB 2.0 Debug Connection Device” entry shown in Device Manager on the host.)
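For reference, the host-side connection can also be started from the command line. The following is a minimal sketch, assuming a default Debugging Tools for Windows install path (adjust it for your system); the target name must match the one configured with bcdedit on the target.

```python
# Minimal sketch: launch WinDbg on the host with a USB kernel connection.
# The install path is an assumption - adjust it to your Debugging Tools for
# Windows location - and the target name must match the bcdedit setting
# configured on the target system.

import subprocess

WINDBG = r"C:\Program Files (x86)\Windows Kits\8.1\Debuggers\x64\windbg.exe"
TARGET_NAME = "WinDbgTarget"

subprocess.Popen([WINDBG, "-k", "usb:targetname=" + TARGET_NAME])
```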
Summary – Dell PowerEdge 13G servers with USB 3.0 provide a significant new method to debug modern Windows operating systems.
Housed at the Australian National University (ANU) in Canberra, the National Computational Infrastructure (NCI) has routinely hosted many of the country's scientific organizations, which routinely perform data-intensive computations on the university's Raijin supercomputer. Facing greater and more complex demand, NCI was recently tasked with building an on-demand, high-performance "science cloud" node.
Two requirements were imperative: the compute environment had to react easily when highly data-intensive workloads are suddenly fed through the supercomputer – even when it is running at 90 percent to full capacity – and it had to be able to string together existing complex environments. These requirements meant the OpenStack-based cloud had to be designed to be flexible enough to support such dynamic workflow processes.
To create the science cloud, NCI built a 3,200-core compute cloud based on Intel Xeon CPUs, housed within 208 Dell PowerEdge C8220 and C8220X compute nodes, 13 Dell PowerEdge C8000XD storage nodes, and PowerEdge R620 rack servers. This allowed NCI to meet its design goals of establishing a cloud capability that complements and extends the existing investment in supercomputing and storage, while addressing any potential end-user shortcomings.
The Australian government has invested over AU$100 million in developing the hardware infrastructure and the necessary support tools and virtual laboratories.
You can read about the NCI's science cloud in greater detail here.
by Joey Jablonski
Recently the Enterprise Strategy Group (ESG) published an interesting white paper on Dell's big data and analytics solutions. Geared toward an important segment of the market that has heretofore been underserved, the company's versatile array of products and services is seen as a faster way for customers to realize value from any number of data sources.
The white paper highlights three important themes:
The ESG white paper argues that these big data and analytics solutions offer a large, yet still relatively untapped, market the opportunity to focus on the business outcomes to be realized rather than the complexity of IT needs.
The ESG white paper in its entirety is attached.
In my last post I described the Toad™ DBA Suite for Oracle® technical brief we recently released, and how the Toad DBA Suite can make performance management – diagnosis, tuning SQL and indexing – much easier for you.
In this post I’ll summarize the section of the technical brief on database maintenance, which includes common tasks like running scripts, generating reports, assessing security threats and comparing schemas and databases. As in the last post, you can accomplish all these tasks in Oracle Enterprise Manager (OEM) alone, but I think it takes too long. We built Toad DBA Suite so you can do it more efficiently.
Creating a table similar to an existing table – with easier navigation
This simple, common task brings up two typical advantages of Toad DBA Suite: Not only does it take fewer steps to execute, but you don’t have the overhead of launching a new web page at every step. You open the Schema Browser, right-click the desired table and select Create Like.
You can accomplish the same thing in OEM, but with a new web page at almost every step: Targets, Databases, Instance, Administration, Schema Tables, etc. It may be convenient for maintaining a database from a computer anywhere in the world, but if you do most of your database administration from your regular workstation, it’s a big downside for a little upside.
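For context, the operation both tools perform here boils down to one piece of DDL. The sketch below is only illustrative – the connection string and table names are hypothetical, it assumes the cx_Oracle module and an Oracle client are available, and Toad and OEM generate much richer DDL (constraints, storage clauses, and so on).

```python
# Hedged sketch: the plain-SQL equivalent of "Create Like" - create a new,
# empty table with the same column structure as an existing one.
# Connection details and table names are hypothetical.

import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "dbhost/orclpdb1")
cur = conn.cursor()

# WHERE 1 = 0 copies the column definitions but no rows.
cur.execute("CREATE TABLE employees_copy AS SELECT * FROM employees WHERE 1 = 0")

conn.close()
```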
Getting an overview of all your databases – in one window
Suppose you want an overview of the databases you’re managing, several of which may be running on the same host. You’re likely to want information on the data files, memory layout or top sessions.
With OEM you can find CPU information on the Database Home Page, but memory and tablespace layout are on two other pages.
The Database Browser in Toad DBA Suite shows you information on all of your managed databases, combined in a single view, with access to database and schema objects in the navigator. It gives you a bigger picture of all your managed databases instead of a picture of just one database at a time.
Mind you, each of these products has database maintenance features that the other does not have, and page 11 of the technical brief includes a detailed feature comparison. But besides the convenient overview in the Database Browser, Toad’s other big advantage in database maintenance is the Automation Designer, which lets you set up a group of tasks and run them against multiple managed databases, rather than running them separately for each database. That can save you a lot of time and drudgery.
One more thing: If you ever need to run any of these tasks while you’re out of the office, MyToad enables you to run Apps and Actions created in Automation Designer remotely from a mobile device.
The K1000 metering tool received a major upgrade in 5.5 with the addition of the new Software Catalog. The catalog allows you to meter suite applications more precisely than ever before. Metering is a simple three-step process that gives you long-term usage data to help you save money come license-renewal time. In 6.0, that same catalog also improves application control, but we’ll talk about that another time.
To meter applications, the first thing you need to do is determine which software applications you need to meter. This sounds easy, and it is, but you still need to spend a few minutes thinking it through. Metering Office, for example, can allow you to downgrade people from “Office Pro Plus” to “Pro” or “Standard”, saving you money. While many employees will need Outlook, Word, Excel, and PowerPoint, they may not need some of the more powerful tools such as InfoPath or Access. The K1000 metering tool allows you to view the usage of these items on a per-application and/or per-suite basis, per machine.
Once you’ve determined your applications, simply browse to Inventory > Software Catalog, search for and click on the desired software title. From the catalog detail page for the application, check the box “Metered” in the top left corner of the display window. This will meter application usage for all labels where metering is enabled. A Device label can be configured for metering in the Label Management screen, or at the time of creation.
After you’ve performed these tasks, it is simply a waiting game to see who uses these applications and how often. This can be gathered from a report or by individually viewing the Software Catalog entry for the metered item. Remember to allow more than a few days – weeks, really – for quality usage data to be collected.
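Once the data starts accumulating, a quick script can help you scan an exported report for rarely used installs. The sketch below is just one way to do it, assuming you have exported a metering report to CSV; the file name and column names (“Device”, “Application”, “Launches”) are hypothetical and should be matched to your own export.

```python
# Hedged sketch: summarize exported K1000 metering data to spot rarely used
# installs. Assumes a metering report exported to CSV; the file name and
# column names ("Device", "Application", "Launches") are hypothetical.

import csv
from collections import defaultdict

launches_per_app = defaultdict(int)
devices_per_app = defaultdict(set)

with open("metering_report.csv", newline="") as f:
    for row in csv.DictReader(f):
        app = row["Application"]
        launches_per_app[app] += int(row["Launches"] or 0)
        devices_per_app[app].add(row["Device"])

# Print the least-used applications first - candidates for downgrade/removal.
for app in sorted(launches_per_app, key=launches_per_app.get):
    print(f"{app}: {launches_per_app[app]} launches across "
          f"{len(devices_per_app[app])} devices")
```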
by Sanjeet Singh
At Dell, we recognize the importance of data as the new currency and as a competitive differentiator for customers. Data is being created and consumed at exponential rates, and organizations are scrambling to keep up. This is not unique to a single industry; it is affecting all vertical markets.
Today, we are excited to introduce Dell QuickStart for Cloudera Hadoop, which provides an easy starting point for organizations to begin building a complete Big Data solution.
Dell QuickStart for Cloudera Hadoop simplifies procurement and deployment: it includes the hardware, software (including the Cloudera Basic Edition) and services needed to deliver a Hadoop cluster, starting you on a proof of concept so you can begin managing and analyzing your data.
For this launch, Dell is building on its deep expertise and its relationships with Cloudera and Intel. The solution represents a unique collaboration within the big data ecosystem to deliver an easy and affordable way for customers to get started now on their big data journey.
The key components of Dell QuickStart for Cloudera Hadoop include:
With Big Data becoming an increasing priority for organizations around the globe, we are excited to offer this unique solution to our customers. If you are interested in this, or other ways we can serve you, please reach out to us via email at Hadoop@dell.com, or visit dell.com/Hadoop.
More and more organizations are evaluating Hadoop to determine how best to adopt these new capabilities to enable better insight into data. Many IT organizations have mandates from their CIO and CTO leadership to become familiar with Hadoop and determine how best it would fit into the existing IT landscape. But this familiarity takes resources – resources in the form of time, hardware costs, and training investments.
As with any new technology, Hadoop cannot simply be plugged into existing IT environments. Existing environments have established processes, patterns, applications and best practices that must often be modified to accommodate new technologies. IT teams have to fully test new technologies in their specific environments before releasing them for use by the wider organizations that IT supports. Stability and performance for the business are paramount for any organization.
Dell QuickStart for Cloudera Hadoop enables organizations to quickly engage in Hadoop testing, development and proof of concept work. Through the combination of Dell PowerEdge servers, Cloudera Enterprise Basic Edition and Dell Professional Services, organizations can quickly deploy Hadoop and enable development and application teams to test business processes, data analysis methodologies and operational needs against a fully functioning Hadoop cluster.
With the added flexibility of Dell Professional Services, you can choose the combination of training, installation and application development that is right for your organization. By adjusting the combination of services to your organization's unique needs, you can ensure that your team gets the most out of QuickStart and quickly determines how best to grow to production usage of Hadoop.
Dell QuickStart for Cloudera Hadoop enables your organization to quickly deploy Hadoop, integrate it with your environment and enable a broad range of staff in the organization to begin leveraging Hadoop to determine the best fit for production deployments and processes.
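As an illustration of the kind of minimal job a development team might run to smoke-test a new QuickStart cluster, here is a hedged sketch of the classic word-count example for Hadoop Streaming. It is not part of the QuickStart offering itself, and file paths and the location of the hadoop-streaming JAR used to submit it vary by distribution.

```python
#!/usr/bin/env python
# Hedged sketch: a classic word-count mapper and reducer for Hadoop Streaming,
# the kind of minimal job teams often run to smoke-test a new cluster.
# Both roles live in one file; the command-line argument selects the role.

import sys

def mapper():
    # Emit "word<TAB>1" for every word on stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Hadoop Streaming sorts mapper output by key, so equal words are adjacent.
    current, count = None, 0
    for line in sys.stdin:
        word, _, n = line.rstrip("\n").partition("\t")
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    reducer() if len(sys.argv) > 1 and sys.argv[1] == "reduce" else mapper()
```

It could be submitted with something like hadoop jar hadoop-streaming.jar -input /data/in -output /data/out -mapper "python wordcount.py" -reducer "python wordcount.py reduce" -file wordcount.py, with the exact JAR path taken from your own Hadoop installation.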
Docker has been taking the software developer world by storm over the last year. As you can see from the Google Trends plot below for the term “Docker”, interest in both the open source project and the eponymous company has been very high. Further, last week we learned that Docker, the company behind the open source project, has raised $40M in new funding. For those like me who are not directly involved in software development, should we care about this trend? The answer is yes, you should, particularly if you are involved in planning, strategizing, or otherwise interested in Cloud. The capabilities represented by Docker technology are a great fit with the vision and promise of Cloud, and could have a significant impact on its development from here.
Google Trends plot for “Docker” requested on Sept. 22, 2014. Let’s assume Dockers, the pants manufactured by Levi Strauss & Co, are not responsible for the large increase in search since 2013.
Containers and Docker are changing how developers think about building software
First, for those not familiar with Docker, I offer a quick primer. Docker is, according to Docker.com, an “open platform for developers and system administrators to build, ship, and run distributed applications.” Now, as a non-developer (and admittedly I have not been a system administrator in eight years), what does this mean? As many things go in computing, what we think is new is actually built on something old, and in this case Docker comes from the ability of the Linux operating system to “containerize” software programs. Linux containers (often abbreviated to LXC) allow you to run multiple programs in parallel on top of a single Linux operating system. Rather than thinking of this as virtual machines, where each VM rides on top of a hypervisor and each has its own guest operating system, LXC gives each container its own slice of physical server resources (like CPU, memory, and I/O) directly from the host. Further, all the containers share the same host Linux operating system, thereby removing the need for a guest OS as we find in the VM model. The interesting part is that any special bit of code not normally part of the base operating system, often called a dependency, can be packaged up inside the container along with the code. That’s a lot to think about, so below is a graphic borrowed from Docker.com to aid in understanding the differences between VMs and containers.
Graphic showing the difference between traditional virtualization as the industry has come to understand it, and the Docker container model taken from www.docker.com/whatisdocker.
There are a few great things about this model if you happen to be a developer. First of all, as a developer, it seems to me you really just want to get to the creative part of your job, which is to write code that solves interesting problems. In the simplest of terms, you want to write code, test it, and then put it into use in “production.” Docker is great at this, because you can write and test code inside containers running on your laptop, then publish those same containers into production. The application has a very high likelihood of running in production just as well as it did on your laptop, since nothing the program cares about has changed (i.e., the underlying operating system, the code, and the dependencies are all the same).
Now, life is not that simple, and applications are rarely written as one large program that can be completely defined within a single container. This is where Docker really helps developers get to the fun part of their job. Since the value of Docker is the ability to maintain a well-defined, and reproducible, unit of code that can be transported, it supports the notion of micro-services. Micro-services are apps that perform a well-defined function, such as a database service or a web server, and individually handle a portion of a larger programmatic goal. Micro-services expose themselves to the outside world through simple APIs. So as a developer, it would be great if I didn't need to reinvent, and hence recode, these services each time I create a new program. I can just grab these micro-services as containers, and link them together with my own containerized micro-service to build a larger application. Developers have been using tools such as GitHub as a place to store and share reusable code. Now, Docker Hub performs this same function for entire containers in the Docker ecosystem. Today, the Docker Hub Registry contains 30,000+ such reusable Docker containers (see https://registry.hub.docker.com/).
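As a concrete illustration of pulling a reusable image and running it, here is a minimal sketch using the Docker SDK for Python (installed with pip install docker). It assumes a local Docker daemon is running, and the nginx image is simply one example of a containerized micro-service you might compose into a larger application.

```python
# Minimal sketch: pull a reusable image from Docker Hub and run it as a
# container using the Docker SDK for Python ("pip install docker").
# Requires a local Docker daemon; the nginx image is just an example.

import docker

client = docker.from_env()  # connect to the local Docker daemon

# Pull the image from Docker Hub (equivalent to "docker pull nginx:latest").
client.images.pull("nginx", tag="latest")

# Start the web-server micro-service, mapping container port 80 to host 8080.
container = client.containers.run(
    "nginx:latest",
    detach=True,
    ports={"80/tcp": 8080},
    name="demo-web",
)

print("Running:", container.name, container.short_id)
```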
The implication for Cloud
You can see why Docker makes developers happy, and hence changes the way they think about building software. I have hinted at how this technology can change how apps are built and how they get deployed. Rather than think of the cloud as a place to run VMs, we can now think of the cloud as a place to run micro-services. These micro-services, running in Docker containers, can be moved from cloud to cloud and scaled up and down without worrying about rebuilding and patching guest operating systems. Hence, the future will be loosely coupled micro-services that need integration and orchestration. Some newly formed companies are already addressing this space, and it will be interesting to see how this shakes out. As Don Ferguson noted in his blog, this trend may be seen as the merging of PaaS and IaaS into a single layer. We will also need tools for testing and deployment of multi-container applications. Cloud hosting providers will potentially have more latitude to move workloads around because code will be running in smaller, more manageable pieces, rather than big monolithic VMs. In theory, the underlying cloud hardware could be changed more frequently as well. This kind of reality sounds more cloud-like than our VM paradigm of today.
Welcome to Big Data at Dell!
In the coming days and weeks, we look forward to offering you the very latest information about the big ideas, cutting-edge innovations, and exciting news coming out of the Big Data industry. Whether it's a video, an industry wrap-up or an intriguing white paper, our expert contributors will ensure you are well informed and aware of up-to-the-minute industry updates.
Managing all of the information around Big Data can be challenging. Big Data at Dell is your solution.
Visit often to see what's new.