Following on from my previous blog about predicting the lifetime of your production database hardware, we’ll now take a look at how Toad for Oracle DBA Suite can help you when you need to migrate hardware.
Fast forward a few weeks… you’ve bought a new server and may be planning to upgrade your Oracle version too. No doubt your hardware provider has gone to great lengths to tell you how much faster the new hardware will be, and Oracle has been raving about the improvements made to the RDBMS. But rather than take their word for it, I think the majority of DBAs would like to find out for themselves, ideally with their own data and database activity!
Another tool that the Toad for Oracle DBA Suite arms you with is Benchmark Factory. It allows you to capture the activity on your current production database and replay it on your pre-production database. (Because, no matter how good your QA/UAT team is, they are very unlikely to be able to replicate the concurrent user workload of your production database.)
Benchmark Factory then gives you a side-by-side comparison of the performance of the capture in production versus the replay in pre-production, using the performance metric of your choice (StatsPack, AWR, etc.). To get the full picture, you can run the performance monitoring pane in Spotlight against the pre-production database while the replay is running in Benchmark Factory. This shows how the different database components behave under production-level activity, allowing you to verify that your Oracle database is configured correctly.
On the theme of Oracle migrations, check out my blog on SharePlex, our Oracle database migration solution that can eliminate the downtime and risk involved in migrating or upgrading Oracle.
So hopefully you can see how Toad for Oracle DBA Suite can give you the information you need when deciding on the future of your database hardware and help avoid any nasty surprises when you do decide to migrate!
I'm also hosting a webcast series around this topic at the moment if you'd like to join and find out more. See the link below:
About Denis O’Sullivan
Denis is a Systems Consultant with Dell Software in Cork, Ireland, covering our portfolio of database management tools such as Toad & SharePlex. Prior to joining Dell Software, Denis worked as an Oracle database administrator and application server administrator for over 7 years.
View all posts by Denis O’Sullivan | Follow Denis O'Sullivan on Twitter
Variant calling refers to the identification of a nucleotide difference from a reference sequence at a given position in an individual genome or transcriptome. It includes single nucleotide polymorphisms (SNPs), insertions/deletions (indels) and structural variants. One of the most popular variant calling applications is GenomeAnalysisTK (GATK) from the Broad Institute. GATK is often used with BWA to compose a variant calling workflow focusing on SNPs and indels. After we published the Dell HPC System for Genomics White Paper last year, there were significant changes in GATK; the key variant calling step, UnifiedGenotyper, is no longer recommended in their best practices. Hence, here we recreate the BWA-GATK pipeline according to the recommended practice to test whole genome sequencing data from mammals and plants in addition to human whole genome sequencing data. This is part of Dell’s effort to help customers estimate their infrastructure needs for their various genomics workloads by providing a comprehensive benchmark.
The detailed configuration is in the Dell HPC System for Genomics White Paper, and a summary of the system configuration and software is in Table 1.
Table 1 Server configuration and software
Server: 40x PowerEdge FC430 in FX2 chassis
Processors: Total of 1120 cores; dual Intel® Xeon® E5-2695 v3, 14 cores each
Memory: 128GB - 8x 16GB RDIMM, 2133 MT/s, Dual Rank, x4 Data Width
Storage: 480TB IEEL (Lustre)
Operating System: Red Hat Enterprise Linux 6.6
Cluster Management tool: Bright Cluster Manager 7.1
Short Sequence Aligner: BWA
Utilities: sambamba 0.6.0, samtools 1.2.1
The current version of GATK is 3.5, and the actual workflow tested was obtained from the workshop ‘GATK Best Practices and Beyond’. In this workshop, they introduced a new workflow with three phases.
Here we tested Phase 1, Phase 2A and Phase 3 of the germline variant call pipeline. The details of the commands used in the benchmark are listed below.
bwa mem -c 250 -M -t [number of threads] -R '@RG\tID:noID\tPL:illumina\tLB:noLB\tSM:bar' [reference chromosome] [read fastq 1] [read fastq 2] | samtools view -bu - | sambamba sort -t [number of threads] -m 30G --tmpdir [path/to/temp] -o [sorted bam output] /dev/stdin
sambamba markdup -t [number of threads] --remove-duplicates --tmpdir=[path/to/temp] [input: sorted bam output] [output: bam without duplicates]
java -d64 -Xms4g -Xmx30g -jar GenomeAnalysisTK.jar -T RealignerTargetCreator -nt [number of threads] -R [reference chromosome] -o [target list file] -I [bam without duplicates] -known [reference vcf file]
java -d64 -Xms4g -Xmx30g -jar GenomeAnalysisTK.jar -T IndelRealigner -R [reference chromosome] -I [bam without duplicates] -targetIntervals [target list file] -known [reference vcf file] -o [realigned bam]
java -d64 -Xms4g -Xmx30g -jar GenomeAnalysisTK.jar -T BaseRecalibrator -nct [number of threads] -l INFO -R [reference chromosome] -I [realigned bam] -known [reference vcf file] -o [recalibrated data table]
java -d64 -Xms8g -Xmx30g -jar GenomeAnalysisTK.jar -T PrintReads -nct [number of threads] -R [reference chromosome] -I [realigned bam] -BQSR [recalibrated data table] -o [recalibrated bam]
java -d64 -Xms4g -Xmx30g -jar GenomeAnalysisTK.jar -T BaseRecalibrator -nct [number of threads] -l INFO -R [reference chromosome] -I [recalibrated bam] -known [reference vcf file] -o [post recalibrated data table]
java -d64 -Xms8g -Xmx30g -jar GenomeAnalysisTK.jar -T AnalyzeCovariates -R [reference chromosome] -before [recalibrated data table] -after [post recalibrated data table] -plots [recalibration report pdf] -csv [recalibration report csv]
java -d64 -Xms8g -Xmx30g -jar GenomeAnalysisTK.jar -T HaplotypeCaller -nct [number of threads] -R [reference chromosome] -ERC GVCF -BQSR [recalibrated data table] -L [reference vcf file] -I [recalibrated bam] -o [gvcf output]
java -d64 -Xms8g -Xmx30g -jar GenomeAnalysisTK.jar -T GenotypeGVCFs -nt [number of threads] -R [reference chromosome] -V [gvcf output] -o [raw vcf]
java -d64 -Xms512m -Xmx2g -jar GenomeAnalysisTK.jar -T VariantRecalibrator -R [reference chromosome] --input [raw vcf] -an QD -an DP -an FS -an ReadPosRankSum -U LENIENT_VCF_PROCESSING --mode SNP --recal_file [raw vcf recalibration] --tranches_file [raw vcf tranches]
java -d64 -Xms512m -Xmx2g -jar GenomeAnalysisTK.jar -T ApplyRecalibration -R [reference chromosome] -input [raw vcf] -o [recalibrated filtered vcf] --ts_filter_level 99.97 --tranches_file [raw vcf tranches] --recal_file [raw vcf recalibration] --mode SNP -U LENIENT_VCF_PROCESSING
Torque/Maui was used to manage the large number of jobs processing sequencing samples simultaneously. Optional steps 6, 7 and 8 in Phase 1 were not included in the benchmark, since Step 6, PrintReads, took 12.5 hours with 9 threads for the Bos taurus sample (18 hours with a single thread). These optional steps are not required, but they are useful for reporting purposes; if necessary, they can be added as a side workflow to the main procedure. For each job, 9 cores were assigned when 120 concurrent jobs were processed, and 13 cores were used for the test of 80 concurrent jobs.
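A per-sample Torque submission script for Phase 1, Step 1 might look like the sketch below. The job name, paths and resource lines are illustrative assumptions, not the benchmark's actual scripts; only the 9-cores-per-job figure comes from the configuration described above.

```shell
# Write a hypothetical Torque/Maui job script for one sample.
# All file names and paths are placeholders for illustration.
cat > job_sample01.pbs <<'EOF'
#!/bin/bash
#PBS -N bwa-gatk-sample01
#PBS -l nodes=1:ppn=9
#PBS -l walltime=48:00:00
#PBS -j oe

cd "$PBS_O_WORKDIR"

# Phase 1, Step 1: align with BWA, convert, and sort (9 threads per job,
# matching the 120-concurrent-job configuration)
bwa mem -c 250 -M -t 9 \
    -R '@RG\tID:noID\tPL:illumina\tLB:noLB\tSM:sample01' \
    ref/genome.fa reads/sample01_1.fastq reads/sample01_2.fastq \
  | samtools view -bu - \
  | sambamba sort -t 9 -m 30G --tmpdir tmp -o sample01.sorted.bam /dev/stdin
EOF

# One qsub per sample lets the scheduler pack the concurrent jobs onto the
# cluster:
# qsub job_sample01.pbs
```

With 1120 cores in total, 120 jobs at 9 cores each consume 1080 cores, and 80 jobs at 13 cores each consume 1040, which is why each configuration fits on the cluster.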
In addition to the benchmark for human whole genome sequencing data published in the whitepaper, we gathered cow, pig, two sub-species of rice (japonica and indica) and corn reference genomes from Illumina’s iGenomes site and the Ensembl database. Fortunately, reference variant call data exist in the standard VCF file format for human, cow and pig. Variant data for japonica rice were obtained from the 3000 Rice Genomes project on AWS and modified to conform to the standard VCF format. However, the chromosome coordinates in this VCF file do not match the actual reference chromosome sequences, and we were not able to find a matching version of the reference variant information in public databases. For indica rice and corn, we gathered the variant information from Ensembl and converted it into a compatible VCF format. Whole genome sequencing data were obtained from the European Nucleotide Archive (ENA). The ENA Sample IDs in Table 2 are the identifiers that allow retrieval of the sequence data from the site. Although it is not ideal to test an identical input across a large number of processes, it is not feasible to obtain a large number of similar sample data sets from public databases.
Table 2 WGS test data for the different species: * x2 indicates the data is paired-end reads. ƚThe Test ID column represents the identifiers for the sequence data used throughout the test.
ENA Sample ID
Sample Base Count
Single file Size x2*
Reference Genome size (bp)
Depth of Coverage
Number of variants in Ref
Homo sapiens (human)
17 GB x2
54 GB x2
Bos Taurus (cow)
35 GB x2
12 GB x2
Sus scrofa (pig)
19 GB x2
10 GB x2
Oryza sativa (rice)
22 GB x2
4 GB x2
Zea mays (corn)
14 GB x2
After mapping and sorting of the sequence input files, quality statistics were obtained from the output files of Phase 1, Step 1. The SRR17060031 sample is from a bovine gut metagenomics study and, as expected, was not well mapped onto the Bos taurus UMD3.1 reference genome from Ensembl. The majority of DNA from the bovine gut is foreign and has a different sequence composition.
Table 3 Mapping qualities of sequence read data, obtained using ‘samtools flagstat’. ‘Total QC-passed reads’ is the number of reads that passed the sequencing quality criteria. Among all QC-passed reads, the number of reads actually mapped on a reference genome, and its percentage, is in the ‘Mapped reads (%)’ column. ‘Paired in sequencing’ is the number of reads properly paired by the sequencer. Among those, the number of paired reads mapped on a reference genome as pairs is listed in ‘Properly paired (%) in mapping’.
Total QC-passed reads
Mapped reads (%)
Paired in sequencing
Properly paired (%) in mapping
29,291,792 ( 3.60%)
28,813,072 ( 3.54%)
The rest of the samples aligned to their reference genomes with high quality; more than 80% of the reads paired in sequencing were properly mapped as pairs on the reference genomes.
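The Table 3 columns come straight out of the ‘samtools flagstat’ text. A small parsing sketch follows; the flagstat lines below are fabricated sample output in flagstat's format, not real benchmark numbers, and in practice you would pipe in the output of `samtools flagstat sample.sorted.bam`:

```shell
# Fabricated 'samtools flagstat'-style output for illustration only.
flagstat_out='813416668 + 0 in total (QC-passed reads + QC-failed reads)
29291792 + 0 mapped (3.60% : N/A)
812345678 + 0 paired in sequencing
28813072 + 0 properly paired (3.54% : N/A)'

# Pull out the QC-passed counts for the Table 3 columns.
total=$(echo "$flagstat_out"    | awk '/in total/        {print $1}')
mapped=$(echo "$flagstat_out"   | awk '/ mapped \(/      {print $1}')
properly=$(echo "$flagstat_out" | awk '/properly paired/ {print $1}')
echo "Total QC-passed: $total, mapped: $mapped, properly paired: $properly"
```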
It is also important to check what level of mismatches exists in the alignment results. The estimated variant rate in the human genome is one in every 1,200 to 1,500 bases, which makes roughly 3 million base differences between any two randomly picked people. However, as shown in Table 4, the results do not quite match the 3 million base estimate. Ideally, about 36 million mismatches should appear in the Hs1 data set, since it covers the human reference genome 13 times. However, the mismatch rate is considerably higher than the estimate, and at least one out of two variants reported by the sequencing might be an error.
Table 4 The number of reads mapped perfectly on a reference genome and the number of reads mapped partially
Number of reads mapped with mismatches (mm)
Perfect match (%)
One mm (%)
Two mm (%)
Three mm (%)
Four mm (%)
Five mm (%)
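Mismatch counts like those in Table 4 can be tallied from the NM:i tag that BWA records for each aligned read (the number of edits relative to the reference). A minimal sketch over three fabricated SAM records; a real run would stream `samtools view sample.bam` instead:

```shell
# Three fabricated SAM records (fields abbreviated); NM:i:<n> holds the
# number of mismatches for each read.
sam_records='r1 0 chr1 100 60 100M * 0 0 ACGT IIII NM:i:0
r2 0 chr1 200 60 100M * 0 0 ACGT IIII NM:i:2
r3 0 chr1 300 60 100M * 0 0 ACGT IIII NM:i:0'

# Histogram reads by mismatch count, scanning optional tags from field 12 on.
echo "$sam_records" | awk '{
  for (i = 12; i <= NF; i++)
    if ($i ~ /^NM:i:/) { split($i, a, ":"); count[a[3]]++ }
} END {
  for (mm in count) print mm " mismatches: " count[mm]
}' | sort
```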
Total run time is the elapsed wall time from the earliest start of Phase 1, Step 1 to the latest completion of Phase 3, Step 2. Time measurement for each step is from the latest completion time of the previous step to the latest completion time of the current step as described in Figure 1.
The running time for each data set is summarized in Table 5. Clearly the input size (the size of the sequence read files) and the reference genome size are the major factors affecting running time. The reference genome size is the major factor in the ‘Aligning & Sorting’ step, while the size of the variant reference most affects the ‘HaplotypeCaller’ step.
Table 5 running time for BWA-GATK pipeline
Bos taurus (cow)
Total read size, gzip compressed (GB)
Number of samples ran concurrently
Run Time (hours)
Aligning & Sorting
Generate Realigning Targets
Realign around InDel
Total Run Time
Number of Genomes per day
The current version, GATK 3.5, is considerably slower than version 2.8-1, which we tested in our white paper. In particular, HaplotypeCaller in the new workflow took 4.52 hours, while UnifiedGenotyper in the older version took about 1 hour. Despite the significant slow-down, the GATK team believes HaplotypeCaller produces better results that are worth the roughly five-times-longer run.
There are data issues in the non-human species. As shown in Table 5, for similar input sizes, Hs1 and Ss1 show a large difference in running time. The longer running time for non-human species can be explained by the quality of the reference data. The aligning and sorting process takes more than twice as long in the other mammals, and it becomes worse in plants; plant genomes are known to contain large numbers of repeat sequences, which make the mapping process difficult. It is important to note that the shorter running time for HaplotypeCaller in rice does not reflect real performance, since the size of the reference variant file was reduced significantly due to the chromosome length/position mismatches in the data. All variant records outside the chromosome range were removed, but position mismatches were used without correction. The smaller reference variant file and the incorrect position information made the HaplotypeCaller running time shorter. Corn’s reference data is no better in terms of accuracy for this benchmark. These data errors are the major causes of the longer processing times.
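The clean-up described above, dropping variant records whose coordinates fall outside the chromosome, can be sketched with awk. The chromosome lengths and VCF lines here are invented for illustration:

```shell
# Hypothetical chromosome lengths (name, length in bp).
chrom_lengths='chr1 43270923
chr2 35937250'

# Toy VCF body; the second record sits past the end of chr2.
vcf='chr1 1200 . A G . PASS .
chr2 99999999 . C T . PASS .
chr2 500 . G A . PASS .'

# Keep header lines and any record whose POS fits on its chromosome.
echo "$vcf" | awk -v lens="$chrom_lengths" '
BEGIN { n = split(lens, l, /[ \n]+/); for (i = 1; i < n; i += 2) len[l[i]] = l[i+1] }
/^#/ { print; next }
($1 in len) && ($2 + 0) <= (len[$1] + 0) { print }'
```

In a real VCF the fields are tab separated, which awk's default field splitting also handles.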
Nonetheless, the results shown here can serve as good reference points for worst-case running times. Once the reference data are cleaned up by researchers, the overall running time for the other mammals should be similar to that of Hs1 in Table 5, with proper scaling for input size. For now, however, it is hard to estimate accurate running times for non-human species.
Foglight for Java 5.9.8 and Foglight for .Net 5.9.8 were released on April 15th. The release contains enhancements, quality improvements and additional platforms for Java and .Net.
One feature I want to call your attention to is the improvement to service building for Java and .Net servers. Specifically, on the Application Server Monitor dashboard, users can now select individual application servers and applications when they create a service. The service can now also be reused locally or globally. This makes it easy for administrators to create their own service groups for quick filtering by system, server or application.
We have also done some dashboard tweaking to improve UI performance on the Application Server views, and we have added the following new platforms:
WebLogic 12.1.3, 12.2.1 (12C)
JBoss EAP 6.4.0 to 6.4.4
.Net Framework Version 4.5.1, 4.5.2, 4.6.0, 4.6.1
If there are additional platforms that you are planning to implement this year, please contact me directly (firstname.lastname@example.org) so that I can prioritize your platforms into our test matrix.
Am I a Geek? Yes, I thought, but then I started thinking that what geek used to mean and what it has grown to mean are probably different than what I had in mind. So, being a Knowledge Management (KM) Manager, I did the obvious: I searched. I searched Google for ‘what is a geek’, and it returned the following definition
Immediately after reading definition number one I thought "check". Definition number two: "ugh, no, not me". The verb definition, yes, but I’m geeking out over things like KM, Knowledge Centered Support (KCS) and Star Wars. [By the way, Dell Software Support has an awesome Support Portal and Knowledge Base, be sure to check it out here]
In the past year I’ve come to realize that my love for Star Wars is strongly shared by people other than my family. It’s widely licensed now, so it’s easy to spot another Star Wars fan. This is great because it means that I no longer have to wear men’s Star Wars t-shirts; women’s Star Wars shirts are now widely available! Though I’ll admit the cut is nice, I still prefer the graphics on the men’s tees versus the feminized images on the women’s, which means I still steal Star Wars shirts from my husband. We are indeed Star Wars geeks. I love this topic because Geek Pride Day, May 25, shares the anniversary of the first Star Wars film, Episode IV: A New Hope, released on May 25, 1977. I wasn’t born yet… OK, that’s an untruth :) I was born, but not old enough to sit still and watch a full-length film. Eventually the love for the genius behind these films would fill my life, and my house.
Just a FEW of the items -
The creativity that George Lucas displayed in writing and creating these stories is amazing, and I could write on and on about many aspects of it, but since this is a short and sweet blog I’ll refrain. I will, however, tell you that the appreciation of the Star Wars Universe and its expanded stories (films, books, comics, etc.) has allowed my family and me to share and enjoy something together. Something we all love "geeking out" about. Here is a picture of us at the 2015 Star Wars Convention. We would have loved to attend the 2016 Convention, but it’s in a galaxy far far away (London, England) and requires lots of Republic Credits – seriously, London is far from Colorado.
Hope you all have a wonderful geeky day. May the Force be with You!
About Monique Cadena
Monique has worked in the field of Knowledge Management (KM) and Knowledge Centered Support (KCS) for over 19 years. She is recognized as an Innovator by the Consortium for Service Innovation (CSI). She has been working for Dell since 2013 and is currently responsible for driving governance of the Dell Software KM program best practices and ensuring Dell Software is up to date with current KM, KCS and Social Support trends.
View all posts by Monique Cadena |
This is an optional hotfix and can be installed on the following vWorkspace roles -
The following is a list of issues resolved in this release.
HTML 5 session may occasionally freeze.
Microsoft Office application is launched a second time on reconnecting to a disconnected session.
Improvements to Web Access site security.
Copy/paste operation does not work in Microsoft Excel when using the HTML5 connector.
Failed to connect to RDP Proxy error when there is a space in the username.
This hotfix is available for download at:
Let me introduce you to Al and Don, my two SQL Server DBA beach buddies.
Photo Credit: Alex Liivet Licensed under CC BY 2.0
Al administers about 60 SQL Server databases in his company. He spends most of his life monitoring the performance of his SQL Server environment. Every year around mid-June he starts singing, “Summertime Blues,” “Cruel Summer” and “This Summer’s Gonna Hurt,” and he continues singing them well into September.
Al spends most of his summer at his desk: leaning over his keyboard, cobbling together root cause analysis, wondering why he couldn’t see database problems before they hit, and guessing at which layer (OS, SQL Server, virtual, Azure, application) they reside in.
Al’s never going to get a sun tan. He ends every summer as pale as he started it.
Then there’s Don. He’s SQL Server DBA for about 90 databases in his company. Like Al, he’s responsible for monitoring SQL Server performance, but unlike Al, he doesn’t spend most of his life at it. What’s on his playlist this time of year? “All Summer Long,” “Summer Girls” and “Cool for the Summer.”
He has to deal with the same problems that fill Al’s day: running out of physical and virtual resources, poor session response time, high disk input/output, high CPU usage due to poorly written SQL statements and, most of all, users who are unhappy and unproductive because of those problems.
Yet Don finds plenty of time to work on his suntan.
Al uses a handful of different tools for his troubleshooting. He’s found some on the web, he uses utilities that come with SQL Server and he’s even put together a few tools on his own. But he has to put a lot of effort into bouncing from tool to tool when he’s trying to improve database performance and keep his users happy. If he plugs away at it long enough, he can fix what’s broken, but he never manages to prevent issues before they affect users and productivity.
And he doesn’t get much of a summer.
Don, on the other hand, got the memo a while ago and started using Spotlight on SQL Server Enterprise. He gets end-to-end monitoring, diagnosis and SQL Server optimization all rolled into a single tool. Even before users have the chance to complain about database performance problems, Don can see many of those problems coming and he can see which levels in his SQL Server environment are affected by them. He finds what he needs to fix and where to fix it.
Then he gets back to working on his tan.
Better yet, he saw a webcast called SQL Server DBAs: Do You Have Mobility Tranquility? in which Dell Software’s Peter O’Connell walked through the Spotlight mobile app. Don can receive and address specific notifications on his mobile devices from all of the instances he monitors, anytime and anywhere (including from the beach).
“Dude, this summer’s gonna rock,” says Don. “I’ll have fun, fun, fun ‘til my manager takes my smartphone away.”
How do you want your tan to look by the end of the summer? Like Al’s or like Don’s?
More important, how do you want to handle your SQL Server performance monitoring? Hunched over your keyboard Alt-Tabbing from tool to tool, or using a single mobile app to monitor your SQL Server environment and run SQL Server diagnostics?
We’ve recorded an on-demand webcast called SQL Server DBAs: Do you have Mobility Tranquility? If you’re already monitoring your SQL Server performance with Spotlight, join Peter O’Connell as he reviews the desktop functions you can now access through the mobile app. And if you’re new to Spotlight, Peter will introduce you to the wide range of SQL Server metrics you can monitor, plus the convenience of doing it through your mobile devices.
Sunscreen not included.
About Claudia Coleman
Claudia Coleman is a Senior Product Marketing Manager in the Dell Software Systems and Information Management Business Unit. With a focus on database monitoring and management technology, Claudia speaks to the issues important to the DBA and spends the majority of her time in the world of SQL Server.
View all posts by Claudia Coleman |
Follow #ThinkChat on Twitter Friday, May 27, 2016, at 11:00 AM PST for a live conversation exploring what the future holds for you and your analytical approaches to business enhancement!
Join @DellBigData for the May #ThinkChat tweet-up, where we hope to explore and predict how we will want to leverage and expand the use of analytics in our lives – at both a data and metric level. After all, you can’t have advanced analytics without the fundamentals tackled and mapped. Bring your questions and your best-practice experience to share with folks exploring the idea of analytics, and if you are just getting started, bring your visions of how things can and will be different when analytics is delivered to the business. Whether your ideas involve delivering an acronym soup (ETL, BI, EDW, HDFS), or you are more interested in sharing and learning how the business improves with advanced analytics in the form of practical use cases, or you just have questions about how to get started and where to go for more information – this tweet-up can be the place for you. Meet, mingle, share and learn. We hope to see you there!
Join Shawn Rogers (@ShawnRog), marketing director for Dell Statistica, Joanna Schloss (@JoSchloss), BI and analytics evangelist in the Dell Center of Excellence, and David Sweenor (@DavidSweenor), product marketing manager at Dell Statistica for this month's #ThinkChat as we conduct a community conversation around your thoughts and real-life experiences!
Follow #ThinkChat on Twitter and join the conversation!
Where: Live on Twitter – Follow Hashtag #ThinkChat to get your questions answered and participate in the conversation!
When: May 27, at 11:00 AM PST
Questions discussed on this program will include:
1. What is “Advanced Analytics”? Why is it so important?
2. Do you feel like your organization successfully uses analytics to make decisions?
3. How would you describe your analytic solutions? BI, predictive, advanced analytics, reporting, spreadsheets?
4. Does your business have data scientists/business analysts working toward analytic insight?
5. How does your business share or distribute analytic content?
6. Where do you go for analytic support?
7. Where do you go to learn and investigate new analytical innovations? Books, blogs, companies doing this well?
8. What lessons or key takeaways have you learned from implementing your own analytic systems?
9. What were the key metrics or milestones that made your projects successful?
10. What was the most challenging obstacle for delivering on your analytic initiative?
11. What are your favorite software tools, resources, or solutions you would recommend and why?
About Joanna Schloss
Joanna Schloss is a subject matter expert in the Dell Center of Excellence specializing in data and information management. Her areas of expertise include big data analytics, business intelligence, business analytics, and data warehousing.
View all posts by Joanna Schloss |
This blog post was originally written by Thomas Cantwell & Michael Schroeder.
In a previous blog, we demonstrated how to use Windows Deployment Services (WDS) to deploy a Nano Server VHD. Today, we'll focus on a manual deployment to the R730xd using a USB key. This deployment method uses legacy BIOS boot instead of UEFI boot. Note that some of the new features of Windows Server 2016 require UEFI boot, so a future blog will provide the steps to deploy in UEFI mode.
For this walkthrough, we’ll be using the PowerEdge™ R730xd Rack Server.
The Windows Server 2016 Technical Preview 5 Image
First, we will need a bootable USB key. The TP5 evaluation image is ~5GB in size so an 8GB USB key will be sufficient for the complete iso.
We are using the Windows Server 2016 TP5 evaluation iso image for one purpose - to use the WinPE (Windows Preinstall Environment) setup environment to deploy the VHD image to the R730xd.
Using a utility like the Windows USB/DVD Download Tool, you can apply the Windows Server 2016 TP5 evaluation iso to a USB Key. A sample screenshot below shows how this tool makes quick work of the install to a USB key.
With the USB key created, we can now boot into WINPE to deploy our VHD image.
Next, build the Nano Server image (VHD) for the R730XD using the New-NanoServerImage cmdlet provided in the Nano Server folder on the TP5 iso image.
Tip - We deployed Windows Server 2016 TP5 in a Hyper-V VM as our build environment to assist building our Nano Server images. The reason for this is to ensure that when we import modules, our Nano build will be using the latest packages and code from Windows Server 2016. This is our standard practice to maintain latest code for both Windows Server 2016 and Nano Server. This also means you will see any new/changed features.
New-NanoServerImage -Edition Datacenter -DeploymentType Host -MediaPath E:
-BasePath .\Base -TargetPath .\NanoServerPhysical\NanoWDS01.vhd -OEMDrivers
The key to deploying to a physical host is the -DeploymentType option in the command above. The two options are “Host” (to deploy to a physical server) and “Guest” (for VM deployment).
Complete details on creating your Nano Server images can be found under the ‘Nano Server on a physical computer’ section on the Getting Started with Nano Server blog.
Once finished, copy your newly created VHD(s) to your USB key.
Insert the USB Key into a USB port on the R730XD.
When booting the R730xd, press F11 for a one-time boot menu. This allows you to select which device you would like to boot from. At the boot menu, select the device that corresponds to your USB key.
Once you reach the Windows setup screen, press Shift-F10 to open a command prompt window. Type “diskpart” and use the following steps to prepare the disk and the VHD for boot.
Note: This is a destructive process and will delete the contents of disk 0 on the system. At a high level, we need to do a few things: create the partitions on the target drive, copy our VHD to the boot partition, and attach the VHD image.
# Create two partitions and assign a logical letter to the boot partition
select disk 0
clean
create partition primary size=300
format fs=ntfs quick
active
create partition primary
format fs=ntfs label="Nano" quick
assign letter=c
Now copy the .VHD to drive C: from your USB key (e.g., copy d:\tp5hv01.vhd c:\). The “list volume” command in diskpart can help identify the drive letters assigned.
select vdisk file=c:\tp5hv01.vhd
attach vdisk
# From the command prompt (outside diskpart), make the attached VHD bootable,
# e.g. bcdboot <attached VHD letter>:\windows
# Cleanup - Detach the VHD
detach vdisk
Now reboot the system to begin using Nano Server.
Installing Nano Server from a USB key is another quick and easy way to deploy to a server, and is required if the server is not networked (as may be the case in a development environment).
We’ll be providing additional Nano Server blogs with a more hardware focused perspective. Let us know if you have any specific ideas for what you would like to see at WinServerBlogs@dell.com or leave a comment below.
It is important to understand that Nano Server and Windows Server 2016 are still under development. This blog does not reflect what the final product will look like, nor Dell official support for Nano Server as a separate offering. This is for test purposes only.
PowerEdge R730xd Rack Server
Getting Started with Nano Server
This is a mandatory hotfix and can be installed on the following vWorkspace roles -
This release provides support for the following -
Parent VHD deployment sometimes allows more than one thread to initiate parent VHD copy
VMware Standard Clones with Quick Prep does not retain MAC address
Hyper-V Load Balancing not working properly
Hyper-V Catalyst Service cannot restart properly after crashing.
Data Collector hangs if unhandled exception is thrown
Hyper-V hosts all experience blue screen at same time
VMware Standard Clones with Quick Prep has Bad Inventory Path
vWorkspace doesn't support SMB shares with Highly available VMs
Standard Clones not working correctly with VMware vSphere environments
This hotfix is available for download at https://support.software.dell.com/vworkspace/kb/203815
Microsoft Windows Server 2016 Technical Preview 5 was recently released for customer evaluation and may represent the last public release until the product is finalized later this year. While you can kick the tires with the new preview bits, it’s also helpful to have a broad understanding of all the new features and technology coming with this release, because the scope is quite large. Here is a set of helpful MS videos that shine a light on the features and share some of the strategy and vision for what’s being put into Windows Server 2016. My personal favorite is #10: Nano Server with Jeffrey Snover, Lead Architect at Microsoft, who recaps the history of Windows Server and shares the key scenarios that Nano Server is focused on.
Ten reasons you’ll love Windows Server 2016
If you would like to learn more about Dell’s efforts around a few of these topics (SDS, Nano Server & Security) you can read more on our Dell TechCenter Microsoft Windows Server 2016 Wiki.