Indonesian Greeting on Voyager Golden Record
On September 5th, 1977, NASA launched a space probe named Voyager 1. Its mission was to observe our solar system and beyond. It is still traveling at a speed of about 62,140 km/h and will keep going until it loses power.
An Indonesian greeting, spoken by a male voice, was included on the Golden Record carried aboard Voyager 1. The probe is now traveling through interstellar space beyond our solar system, making this the farthest an Indonesian male voice has ever been sent by humankind.
Adi Haryadi
Monday, October 23, 2023
Monday, March 17, 2014
How to access ESXi 5.1 console
When switching from ESX to ESXi, one of the bigger downsides is the lack of the Service Console (Tech Support Mode). There is, however, a basic command-line option on the console which you can use for troubleshooting. Of course, to access this command line you need either physical access to the host or access to the host through a remote access card. We use IBM xSeries servers, so an IMM (Integrated Management Module) is what we'll use for this.
First, access the console and unlock it by pressing F2 and providing the root credentials. Then use the keyboard to navigate to the menu item "Troubleshooting Mode Options". Here you can do all sorts of things, but for now we need to enable the ESXi Shell by selecting the option and pressing ENTER. When done, you'll see a notice on the right side telling you the shell is enabled:
Then, to access the shell, you'll need to press ALT+F1. That key combination can be hard to send through an IMM, so use the menu option to send it to the server:
Now you'll see a login prompt where you can enter the root credentials again, and then issue, for example, a command to add a route to a specific host:
For reference, here is the command so you can copy it:
esxcfg-route -a 192.168.100.0/24 192.168.0.1
esxcfg-route -a 192.168.100.0 255.255.255.0 192.168.0.1
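Both forms add the same route, one using CIDR notation and the other an explicit netmask, so use whichever you prefer. To confirm the route actually took (a quick check that wasn't part of the original screenshots), you can list the VMkernel routing table from the same shell:
esxcfg-route -l      # lists the current VMkernel routes; the new 192.168.100.0/24 entry should appear here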
Wednesday, October 9, 2013
Especially for those who hunger for IOPS
The fastest in-production shipping SSDs - September 2013 - © STORAGEsearch.com

Rackmount SSDs
Earlier versions of this document only listed the single fastest rackmount SSD in each u-size (1U, 2U, etc.), but that method had some disadvantages because the listings were dominated by RAM SSDs, even though flash SSDs represent the realistic choice made by most users. To make the rackmount SSD list more useful, this section now includes the top three or so companies in each market/application silo. You'll have to look at each vendor's own offerings to get the exact specifications, but these are the vendors positioning themselves as the companies to beat in each of these market segments.

PCIe SSD cards
See also the article "the 3 fastest PCIe SSDs list - or is it really lists?". And here's another thing you may be wondering about: how will Memory Channel SSDs affect the PCIe SSD market?
- Fusion-io ioDrive Octal (double-width card, PCIe): 1 million IOPS, 6.2 GB/s of bandwidth
- Virident Systems FlashMAX (1/2 height, 1/2 length, PCIe): 160K IOPS (4KB, 75/25 R/W), 1.6 GB/s sustained write, 47 µs read latency, 1.5 million read IOPS (512B)
- Texas Memory Systems RamSan-70 (single-slot card, PCIe): 600K / 250K R/W IOPS, R/W throughput of 2 GB/s and 1.4 GB/s respectively, 30 µs latency
- OCZ Z-Drive R4 (full height, PCIe): 2.8 GB/s R/W throughput, 410K / 275K R/W IOPS

3.5" (the specs below, for 3.5" SAS drives, are performance indicators for the older 6Gbps products)
- STEC ZeusRAM SSD (SAS 6Gbps): under 23 microseconds average latency
- STEC ZeusIOPS (SAS 6Gbps): 80,000 IOPS random read, 40,000 IOPS random write, with transfer speeds of 550 MB/s read and 300 MB/s write

2.5"
In the 2.5" form factor, the two competing interfaces claiming the "fastest 2.5 inch SSD" are 2.5" PCIe and 12Gbps SAS. However, in the current state of the market, the very small number of 2.5" PCIe SSDs don't have impressive write performance and are mostly pitched at read-intensive applications.
- Samsung XS1715 (2.5" PCIe): 3 GB/s read, 740,000 read IOPS
- HGST Ultrastar (SAS 12Gbps): 1.2 GB/s read, 750 MB/s write, and R/W IOPS of 145,000 and 100,000 respectively
- SMART Optimus (SAS 6Gbps): 100K/50K random IOPS and 500 MB/s sustained R/W transfer rates
- OCZ Vertex 4 (SATA 3): 95K / 85K random IOPS (4K blocks) and 535 MB/s throughput

1.8"
- SMART Optimus (SAS 6Gbps): R/W speeds of 500 MB/s; 45,000 read IOPS and 100,000 write IOPS

1" (SATA)
This form factor includes a diverse range of SSDs-on-a-chip and modules which aren't all plug compatible. For indicative performance see the tiny SSDs page.

USB
- Renice Technology: R/W speeds of 400 MB/s and 320 MB/s respectively
PCIe SSDs
PCIe SSDs for use in enterprise server acceleration have been shipping in the market since 2007.
Over 40 companies already ship enterprise accelerator PCIe SSDs. That number will likely rise to over 100 as the growing availability of PCIe-capable SSD controller chips, other SSD-related chipsets and IP, and SSD software makes it even easier than it already is for newcomers to enter the PCIe SSD market.
PCIe SSDs come in several shapes and sizes. The most familiar form factors are cards, modules, and racks. But a new form factor, the 2.5" PCIe SSD, which emerged last year, will open up new applications such as the displacement of fast SAS SSDs.
The second half of 2013 saw the start of another type of deployment for the PCIe interface: the M.2 form factor, aimed at the consumer SSD market and SSD notebooks. These consumer products have throughputs similar to the enterprise products of 5-6 years earlier, but aren't rated for heavy IOPS. Nevertheless, it wouldn't be surprising to see them appear as enterprise components in read-intensive slots of some future arrays.
.....
(still) the standard for enterprise PCIe SSDs
by which all others are judged
ioDrives from Fusion-io
Thursday, July 11, 2013
Best Practices for SQL Server
Introduction
At VMworld 2008 in Las Vegas, several of us on the virtual performance team met with a variety of customers to talk about Microsoft SQL Server. We already had a large base of customers running very many SQL Server databases on our products, and we wanted to collect information on the challenges posed in the process of virtualizing this critical workload. We were pleased to see that ESX Server handled SQL VMs with excellent performance. But for many customers, the first efforts at virtualizing SQL didn't yield high-performing SQL VMs. After careful investigation and many, many discussions, we've started to put together the puzzle of where SQL Server performance problems come from. This page documents these common problems, borrowing slides from our presentations on the subject.
Virtualizing SQL: The Checklist
We've talked with dozens of customers in the past months to document the issues that resulted in poor SQL performance. Happily, none of the issues were due to the underlying technologies. Here is a list of issues and an explanation of their impacts. These items are roughly listed in order of decreasing likelihood of occurrence.
Item 1: Configure Storage Correctly
Storage configuration problems are the number one cause of SQL performance issues. Usually these problems arise because the DBA requests a virtual disk from the VI admin, and the VI admin places the VMDK on a LUN that may or may not meet the DBA's performance needs. For instance:
- VMs' VMDK files placed on VMFS volumes without enough spindles.
- Many VMDK files placed on a single VMFS volume which could use more spindles.
- Database and log files placed on the same LUN which, you guessed it, could use more spindles.
This may be obvious to some, but this problem occurs again and again. The VI administrator should be aware of a few technical items that can help understand and avoid this problem:
- Based on the IO demands of the DB files, a certain number of spindles should be guaranteed to each file. This means that its VMDK must be placed on a VMFS volume with enough spindles to account for the SQL Server's demands and all of the other demands on that volume.
- Mixing sequential activity (such as log file updates) and random activity (such as database access) results in random behavior. This means that the LUN configuration that was sufficient in the pre-virtual, physical environment may not be sufficient for the consolidated environment. This is discussed further in Storage Performance: VMFS and Protocols.
- When storage isn't meeting the SQL Server's demands, the device latency or kernel latency (queueing time) will increase. Read up on these counters in Storage Performance Analysis and Monitoring.
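To make that last point concrete, here is a minimal way to watch those latency counters from the ESXi Shell (a sketch based on standard esxtop usage, not taken from the original article; the output path and the thresholds mentioned below are illustrative rules of thumb, not hard limits):
esxtop                                 # interactive: press 'd' for the disk adapter view, 'u' for the disk device view
                                       # watch DAVG/cmd (device latency) and KAVG/cmd (kernel queueing latency)
esxtop -b -d 10 -n 30 > /tmp/lun-latency.csv   # batch mode: 30 samples, 10 seconds apart, for offline review
Sustained DAVG/cmd in the tens of milliseconds, or KAVG/cmd above a couple of milliseconds, usually points back at the LUN layout issues described above.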
Item 2: Use Recent Hardware
Often, companies that are dipping their metaphorical toes into virtualization want to run proof-of-concept (POC) experiments to verify that the virtual platform can meet their performance expectations. But it's surprising how many times these experiments are run on older, poorly performing hardware. Presumably the shiny new systems were in use for production applications, so only the mothballed, cobweb-covered servers from a previous generation were available for the POC. This causes many problems. Check out this slide from a talk on SQL Server at VMworld Europe 2009:
The slide points out a couple of things. First, the larger caches and shorter pipelines on newer Intel processors result in a considerable drop in performance overhead. Second, the latency of a VMEXIT, which determines the amount of time it takes to transition from the VM to the VMkernel, has shrunk by a large amount with each subsequent generation of hardware. And don't forget the other additions from Intel and AMD, such as hardware-assisted memory management and IO virtualization.
Item 3: Follow SQL Server Best Practices
Microsoft has kindly provided a web page of best practices for SQL Server storage configuration. These best practices should still be followed when configuring your virtual SQL deployments!
Item 4: Configure VM Identically to Native and Run The Right Test
For many SQL Server POCs the goal is to measure how well the VM performs relative to the native deployment. If this comparison is to be made, it's critical that the VM be configured identically to the physical baseline. Obviously this means that the VM should be run on the same hardware using identically configured LUNs. It's also important to ensure that the VM has the same number of vCPUs and the same amount of memory as the physical baseline. This means restricting the number of pCPUs and the amount of memory with NUMPROC and MAXMEM, respectively, in boot.ini (a sample boot.ini entry is shown at the end of this item).
It also means that the test being applied should be understood. If a benchmark is chosen that uses a very small database, the content will be cached and the storage system won't be exercised. This can skew the results and produce recommendations that are not consistent with production deployments. Here is another slide from the same VMworld Europe 2009 presentation detailing some of what we know about the SQL Server benchmarking alternatives:
We at VMware prefer DVD Store.
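As an illustration of the NUMPROC/MAXMEM restriction mentioned above, here is roughly what the relevant boot.ini entry might look like on the physical baseline. This is a sketch only: the ARC path, OS label, and the 4-CPU / 8 GB values are made up for the example, so substitute whatever the VM will actually be configured with (MAXMEM is specified in megabytes).
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows Server 2003" /fastdetect /NUMPROC=4 /MAXMEM=8192
After the POC, remove the two switches (or restore the backed-up boot.ini) to give the physical box its full resources back.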
Item 5: Use VMware's ESX Server
VMware's hosted products, VMware Server, VMware Workstation, and even VMware Fusion, are all capable of running SQL Server. But if the database is going to be run in production on enterprise-class hardware, use VMware's enterprise-class hypervisor: ESX Server. The initiated rarely confuse these products, but rogue members of large companies often run off-the-books proof-of-concept experiments on VMware's hosted products. When they produce results they don't like, those results get spread throughout the company, which can slow the virtual deployment.
Consider the following data, again from the VMworld Europe 2009 SQL Server presentation:
This information is getting a bit dated now, as it was gathered years ago on ESX Server 3.0. But the point stands: before believing results claiming that "VMware cannot run SQL Server", it's worth investigating the platform used to generate the results.
Item 6: Understand Memory Management and Configure Correctly
Database performance is heavily dependent on the amount of memory available. Almost without exception, providing more memory to SQL Server will improve performance. However, if that memory is coming from a host that is already over-committed, or is being provided through workarounds to 32-bit limitations, performance may suffer. Here are a few keys for SQL Server memory management:
- If more than 3 GB is desired, use 64-bit versions of the OS and application.
- If memory is over-committed on the box, set reservations for performance-critical SQL Server VMs to guarantee that those VMs' memory isn't ballooned or swapped out.
- If SQL Server's "lock pages in memory" parameter has been set, set the VM's memory reservation to the full amount of memory configured for the VM. This setting can adversely interfere with ESX Server's balloon driver; setting the reservation stops the balloon driver from inflating into the VM's memory space.
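One quick way to verify that a reserved SQL VM really isn't being ballooned or swapped (again a sketch using standard esxtop counters, not something from the original article):
esxtop        # press 'm' for the memory view
              # per-VM columns of interest:
              #   MCTLSZ - current balloon size in MB; should stay at 0 for a fully reserved VM
              #   SWCUR  - memory currently swapped out by the hypervisor in MB; should also stay at 0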
Item 7: Align Disk Partitions
This item is really a special but very important case of Item 3: follow best practices. Partition alignment can impact storage performance, which can be critical to some SQL Server VMs' performance. See VMware's paper on partition alignment for more information.
Monday, March 18, 2013
Free tools for VMware
- Veeam Backup & Replication 6.5 free edition
- UniTrends Enterprise Backup Free Edition (Protect 4 VMs For Free) or Unitrends NFR Edition (2 sockets and 2 application-enabled servers FREE)
- Trilead VM Explorer Free VMware & Hyper-V backup (max 2 hosts)
- NexentaStor Community Edition Free 18Tb ZFS Virtual Storage Appliance
- VM Aware Database Performance
- vSphere Plugin Wizard 2.0
- VMware vCenter Mobile Access (vCMA) is a fully configured and ready to run virtual appliance that is required to manage your datacenter from mobile devices.
- VMware Boomerang is a radically simple client application that allows you to use multiple vSphere servers simultaneously
- VMware Guest Console (VGC)
- Cloud Cleaner
- Solarwinds VM to Cloud Calculator
- Onyx - a proxy between the vSphere Client and the vCenter Server. It monitors the network communication between the two and translates it into executable PowerShell code.
- vCenter XVP Manager and Converter
- Veeam ONE Free Edition 24×7 real-time monitoring
- RVTools is a Windows .NET 2.0 application which uses the VI SDK to display information about your virtual machines and ESX hosts
- VMTurbo Real-time monitoring and a library of 32 pre-defined historical reports
- vAlarm Free Desktop Tool for Monitoring vCenter Alarms
- vSphere 4 Client RDP Plug-in
- Xangati for vSphere VMware visibility and troubleshooting tool!
- vOPS™ Server Explorer
- Powergui Graphical User Interface & Script Editor for Microsoft Windows PowerShell
- vGhetto Script Repository
- UBERAlign free alignment of Virtual Machine disks (Nickapedia)
- Thinware vBackup
- PCoIP Log Viewer 2.0
- Quest Workspace Assessment Tool
- ESXi 5.0 / ESXi 5.1 Host Backup & Restore GUI Utility (PowerCLI based)