2016/03/18

Security Session 2016 (Brno, Czech Republic)

After a year, we will again be happy to see you at the sixth annual non-commercial Security Session conference.

The conference takes place on April 2 (Saturday) from 09:00 at Božetěchova 1, Brno.

On our website you can find the program of the conference and workshops, as well as the option to register for the conference.

Admission to the conference is free after prior registration. Due to the limited capacity of the workshops, there will be a sign-up list in the lobby, so don't hesitate to come early, because they are really worth it.

For further information, follow our Facebook or Twitter profile.

We look forward to seeing you!


2016/02/08

Hijacking forgotten & misconfigured subdomains

Hey netsec folks,

It's been a while since my last blog post, so I decided to release a new tool ;)
I think we need more articles about "DNS hacking", and I hope you will learn something new here.

If you missed the awesome article An XSS on Facebook via PNGs by @fin1te, I highly recommend reading it. It's truly awesome work. Let me quote a small part here:

Moving from the Akamai CDN hostname to *.facebook.com
Redirects are pretty boring. So I thought I’d check to see if any *.facebook.com DNS entries were pointing to the CDN.
I found photo.facebook.com (I forgot to screenshot the output of dig before the patch, so here’s an entry from Google’s cache):
OK, I think this deserves a closer look. How do you find DNS entries that point to another domain? What is "dig" and how can we use it during a pentest or a bug bounty program? If you already know the answers to these questions, you can skip this post and save some time ... Thank you :)

What is a CNAME record?

CNAME stands for Canonical Name. CNAME records can be used to alias one name to another, so a CNAME must always point to another domain name, never directly to an IP address. DNS CNAME records are specified in RFC 1034 and clarified in Section 10 of RFC 2181.

For example, if you have a server where you keep all of your documents online, it might normally be accessed through docs.example.com. You may also want to access it through documents.example.com. One way to make this possible is to add a CNAME record that points documents.example.com to docs.example.com. When someone visits documents.example.com they will see the exact same content as docs.example.com. Just to clarify, you can point a CNAME record to any different domain name, but CNAME records that point to other CNAME records should be avoided.
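To make the aliasing concrete, here is a toy lookup in Python. This is not a real DNS client; the record table is hypothetical and just mirrors the docs.example.com example above:

```python
# Hypothetical record table standing in for a DNS zone (toy example,
# mirroring the docs.example.com scenario above).
RECORDS = {
    "documents.example.com": ("CNAME", "docs.example.com"),
    "docs.example.com": ("A", "203.0.113.10"),
}

def resolve(name, records, max_hops=8):
    """Follow CNAME aliases until an A record (or a dead end) is reached."""
    for _ in range(max_hops):          # guard against CNAME loops
        rtype, rdata = records.get(name, (None, None))
        if rtype == "CNAME":
            name = rdata               # alias: restart the lookup at the target
        elif rtype == "A":
            return name, rdata         # canonical name and its address
        else:
            return name, None          # no record of a known type
    raise RuntimeError("CNAME chain too long (possible loop)")
```

Looking up documents.example.com ends at docs.example.com and its address, which is exactly what the alias is for.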

I hope you can now understand how photo.facebook.com was pointed to the Akamai CDN via a CNAME. You can learn more about DNS zones from the links at the end of this post. Don't be afraid, it's not rocket science.

How to analyze CNAME records?

Assuming you are on Linux, you can use the "dig" utility mentioned earlier. dig (domain information groper) is a network administration command-line tool for querying DNS. You can also use "nslookup", which works much the same way. See the man page for more info.

Basic usage to query for any type of record information in the domain blogger.com:
$ dig any blogger.com


; <<>> DiG 9.9.5-9+deb8u5-Debian <<>> any blogger.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38292
;; flags: qr rd ra; QUERY: 1, ANSWER: 11, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;blogger.com. IN ANY

;; ANSWER SECTION:
blogger.com. 299 IN AAAA 2607:f8b0:4001:c20::bf
blogger.com. 299 IN A 209.85.147.191
blogger.com. 3599 IN MX 10 alt1.gmr-smtp-in.l.google.com.
blogger.com. 21599 IN NS ns4.google.com.
blogger.com. 21599 IN NS ns2.google.com.
blogger.com. 3599 IN MX 10 alt2.gmr-smtp-in.l.google.com.
blogger.com. 3599 IN MX 5 gmr-smtp-in.l.google.com.
blogger.com. 21599 IN NS ns1.google.com.
blogger.com. 3599 IN TXT "v=spf1 include:_spf.google.com ?all"
blogger.com. 59 IN SOA ns4.google.com. dns-admin.google.com. 114237278 900 900 1800 60
blogger.com. 21599 IN NS ns3.google.com.

;; Query time: 204 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Feb 09 20:41:08 EST 2016

;; MSG SIZE  rcvd: 329


The dig command output has the following sections:

Header: This displays the dig command version number, the global options used by the dig command, and a few additional header fields.

QUESTION SECTION: This displays the question asked of the DNS server, i.e. your input. Since we said 'dig any blogger.com', the query type is ANY; the default type dig uses is the A record.

ANSWER SECTION: This displays the answer received from the DNS server, i.e. your output. Here it shows all the records for blogger.com; you can see, for example, the name servers.

The stats section at the bottom displays a few dig statistics, including how long it took to execute the query.
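As a quick illustration of that structure, the ANSWER SECTION can be pulled out with a few lines of Python. The sample input below is just a fragment of the blogger.com output above:

```python
# Sample input: a fragment of the blogger.com response shown above.
SAMPLE = """\
;; ANSWER SECTION:
blogger.com. 299 IN A 209.85.147.191
blogger.com. 21599 IN NS ns4.google.com.
blogger.com. 3599 IN MX 10 alt1.gmr-smtp-in.l.google.com.

;; Query time: 204 msec
"""

def parse_answers(dig_output):
    """Return (name, ttl, rtype, rdata) tuples from dig's ANSWER SECTION."""
    answers, in_section = [], False
    for line in dig_output.splitlines():
        if line.startswith(";; ANSWER SECTION:"):
            in_section = True
            continue
        if in_section:
            if not line.strip():       # a blank line ends the section
                break
            # name, TTL, class, type, then the rest of the line as rdata
            name, ttl, _cls, rtype, rdata = line.split(None, 4)
            answers.append((name, int(ttl), rtype, rdata))
    return answers
```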

Digging for CNAME records

Now you know the basic terminology, so let's practice. We can use the awesome ZoneTransfer.me project from digi.ninja, a domain prepared for DNS testing. For example:

$ dig CNAME testing.zonetransfer.me

; <<>> DiG 9.9.5-9+deb8u5-Debian <<>> CNAME testing.zonetransfer.me
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31534
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;testing.zonetransfer.me. IN CNAME

;; ANSWER SECTION:
testing.zonetransfer.me. 300 IN CNAME www.zonetransfer.me.

;; Query time: 557 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Feb 09 21:05:04 EST 2016

;; MSG SIZE  rcvd: 70

As you can see in the answer section, there is a subdomain testing.zonetransfer.me with a CNAME record pointing to www.zonetransfer.me. But how do I know that such a subdomain exists? Well, there are various methods to find subdomains for a target we are testing. As I am a Perl guy, I recommend Fierce.pl from RSnake, which uses various methods, from simple dig queries to brute force.

But what about dig?

You need to know about zone transfers. See, that's why it's called zonetransfer.me. Usually a zone transfer is a normal operation between primary and secondary DNS servers in order to synchronise the records for a domain.

The data contained in a DNS zone may be sensitive from an operational security aspect. This is because information such as server hostnames may become public knowledge, which can be used to discover information about an organization and even provide a larger attack surface.

In 2008 a court in North Dakota, USA, ruled that performing a zone transfer as an unauthorized outsider to obtain information that was not publicly accessible constitutes a violation of North Dakota law.

First of all, we need to know the nameservers to query:

$ dig ns zonetransfer.me

; <<>> DiG 9.9.5-9+deb8u5-Debian <<>> ns zonetransfer.me
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 41543
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;zonetransfer.me. IN NS

;; ANSWER SECTION:
zonetransfer.me. 7199 IN NS nsztm2.digi.ninja.
zonetransfer.me. 7199 IN NS nsztm1.digi.ninja.

;; Query time: 720 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Feb 09 21:17:16 EST 2016

;; MSG SIZE  rcvd: 96

Each name server can be checked remotely for a zone transfer of the target domain. It is often the case that even though the primary name server blocks zone transfers, a secondary or tertiary system may not be configured to block them - hence the check of each name server.

To perform a zone transfer, we will ask the primary nameserver for a DNS query of type AXFR (a full zone transfer):

$ dig @nsztm1.digi.ninja axfr zonetransfer.me

If a zone transfer is allowed without authorization, you should see a list of all the subdomains.
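The per-nameserver check can be sketched in Python like this. Here try_axfr is a hypothetical stand-in for the real query (e.g. shelling out to `dig @ns axfr domain`), and the fake below just simulates a secondary that leaks the zone while the primary refuses:

```python
# Sketch of checking every nameserver for an open zone transfer. try_axfr
# is injected so the logic can be shown without touching the network.
def find_open_nameservers(domain, nameservers, try_axfr):
    """Return the nameservers that answered an AXFR for the domain."""
    open_ns = []
    for ns in nameservers:
        records = try_axfr(ns, domain)   # None (or empty) means refused
        if records:
            open_ns.append(ns)
    return open_ns

# Fake resolver simulating a primary that blocks AXFR while the
# secondary leaks the zone (hypothetical data):
def fake_axfr(ns, domain):
    zone = ["www." + domain, "testing." + domain]
    return zone if ns == "nsztm2.digi.ninja." else None
```

Running the check over both zonetransfer.me nameservers with the fake resolver reports only the leaky secondary, which is exactly why every server gets queried.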

How to abuse "AXFR"?

While searching for material for this research, I found that internetwache.org made a quick scan of Alexa's Top 1M domains for AXFR. The results were interesting:

  • 132854 AXFRs were made
  • 72401 unique domains were affected
  • 48448 unique nameservers were affected

Even the US-CERT published an alert about it. Whether an attacker or a penetration tester, they will attempt to map the footprint of the organization in order to find areas of weakness to exploit. The information collected is usually host names, IP addresses and IP network blocks related to the targeted organization. A successful zone transfer makes this mapping much easier.

How to abuse CNAME records?

So now we know how to use dig for basic DNS queries. We can retrieve a list of subdomains using AXFR, and we know that there are a lot of nameservers that will allow a zone transfer. We can also check the CNAME records, so what could possibly go wrong?

What about expired domains? Is it possible that there are subdomains with CNAME records pointing to expired websites? If so, one can register the expired domain and the CNAME record will work again, but not how the owner wishes ... The hijacked subdomain will serve content from the attacker!
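The core idea can be sketched in a few lines of Python. To be clear, this is a toy version of the check, not subHijack itself; the record data, the registered-domain set and the naive eTLD+1 extraction are all simplifications:

```python
# Toy version of the check: flag CNAME targets whose registrable domain
# no longer appears in a set of registered domains (hypothetical data).
def registrable_domain(fqdn):
    """Naive eTLD+1 extraction: last two labels (ignores multi-part TLDs)."""
    labels = fqdn.rstrip(".").split(".")
    return ".".join(labels[-2:])

def find_hijackable(cnames, registered):
    """Return (subdomain, target) pairs whose target domain looks expired."""
    return [
        (sub, target)
        for sub, target in cnames.items()
        if registrable_domain(target) not in registered
    ]

cnames = {
    "shop.example.com": "example.myshopify.com.",
    "old.example.com": "expired-vendor.com.",
}
registered = {"myshopify.com"}
```

With this data, only old.example.com gets flagged: its CNAME points into a domain nobody holds anymore, so whoever registers it inherits the subdomain's traffic.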

Today I am releasing a simple tool that does exactly this. subHijack is a Perl script that automates zone transfer queries, analyzes CNAME records and checks for expired domains.

Disclaimer: This is not a hacking tool. Don't use it against domains you don't have permission to test. It is for educational purposes only; however, it can be handy during a bug bounty program or a pentest.

https://github.com/vavkamil/subHijack

- - -

https://en.wikipedia.org/wiki/CNAME_record
https://support.dnsimple.com/articles/cname-record/
https://support.dnsimple.com/articles/differences-between-a-cname-alias-url/
http://linux.die.net/man/1/dig
https://en.wikipedia.org/wiki/Dig_(command)
http://www.thegeekstuff.com/2012/02/dig-command-examples/
https://digi.ninja/projects/zonetransferme.php
https://en.wikipedia.org/wiki/DNS_zone_transfer
https://hackertarget.com/zone-transfer/
http://www.hackersgarage.com/fierce-dns-analysis-perl-script.html
https://www.net-dns.org/docs/Net/DNS.html
https://github.com/internetwache/Python-AXFR-Test


2015/06/04

Blogger ~ Server Side Browsing

I recently came across a fabulous presentation called "Server-side browsing considered harmful" by Nicolas Grégoire. I highly recommend reading it first:
http://www.reddit.com/r/netsec/comments/37br5h/serverside_browsing_considered_harmful/

This site is running on the blogger.com service, provided by Google. I have pentested Blogger a lot, so I am familiar with it. Immediately after re-reading this presentation, I went to check it here :)

On the Blogger homepage, you can manage the blogs you're following.
It's basically an RSS reader for blogs. There is a web form where you provide the URL of a blog you want to follow, with an option to follow it publicly or anonymously. But it's broken by design: it will fetch any given URL and look for a feed. The logic behind this functionality is to add a new feed to your "personal RSS reader" and show the web page title. But it will show the web page title even if no feed was found, together with an error message.



There is no blacklist at all, so I was able to fingerprint ports on localhost. I found two internal blogs, one on port 80 and one on port 8000.



http://127.0.0.1 -> Title: File Empire
http://127.0.0.1:8000 -> Title: Business & Human Rights Resource Centre

At this point I was able to fingerprint local web services. With enough time or a crawler, one could determine the internal IP address and begin to scan the internal network. Imagine cracking HTTP basic auth as I did in a previous post, or finding a service vulnerable to CSRF ...
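The oracle described above can be modeled in a few lines of Python. This is only a sketch: fetch_title stands in for submitting a URL through Blogger's "follow a blog" form, and the fake fetcher just replays the two results above:

```python
# Toy model of title-based port fingerprinting: a returned title means a
# live web service behind that port; None means the fetch failed.
def fingerprint_ports(host, ports, fetch_title):
    """Map each port to the title the server-side fetcher leaked, if any."""
    found = {}
    for port in ports:
        title = fetch_title(f"http://{host}:{port}")
        if title:
            found[port] = title
    return found

# Fake fetcher replaying the two internal services found above:
def fake_fetch(url):
    leaked = {
        "http://127.0.0.1:80": "File Empire",
        "http://127.0.0.1:8000": "Business & Human Rights Resource Centre",
    }
    return leaked.get(url)
```

Probing ports 80, 443 and 8000 through the fake fetcher reveals the two internal blogs and nothing on 443, which is the whole fingerprinting trick.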


Thanks to Nicolas Grégoire for this presentation. As you can see, there is still huge potential to find similar vulnerabilities, for example in RSS feeds, online SEO tools, image upload services and so on.

Response from Google Security team:

Regarding our Vulnerability Reward Program, the panel decided this issue has very little or no security impact, and therefore we believe that it is not in scope for the program, so we won't be issuing a reward, nor ranking in our hall of fame.