
Crawling_03 (Hands-on HTML page crawling)

Hands-on HTML page crawling

- Workflow

1. Open the target site in a browser

2. Identify the data you want

3. Use the browser's [developer tools] to check the request method, the URL, and where the data lives (its tags)

4. Send the request and handle the response with the requests module

** urllib, urllib2, and others exist as well, but requests is used here

5. Parse the response with the BeautifulSoup module and extract the desired data (a minimal skeleton follows below)
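
As a quick orientation, here is a minimal sketch of steps 4-5; the URL is just a placeholder, and any static page would do:

import requests
from bs4 import BeautifulSoup

url = 'http://example.com'   # placeholder target page

res = requests.get(url)      # step 4: send the request
res.raise_for_status()       # stop early on an HTTP error

soup = BeautifulSoup(res.content, 'html5lib')   # step 5: parse the response
print(soup.title.get_text())                    # extract one piece of data: the <title> text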

 

 

Searching for tags with BeautifulSoup

- Extract HTML tags by matching various conditions (class, id, or other attributes)

* find/find_all functions: search for tags (all of them) by specifying the search conditions

   find - returns the first matching tag / find_all - returns every matching tag

* CSS selector: search for tags (all of them) by specifying a CSS selector

- BeautifulSoup parses the HTML into a tree structure, and the find function walks that tree to fetch the desired value (see the small demo below)
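
A tiny self-contained demo of the difference; the HTML snippet here is made up purely for illustration:

from bs4 import BeautifulSoup

html = '<div><p class="a">one</p><p class="a">two</p></div>'   # toy HTML
soup = BeautifulSoup(html, 'html.parser')

print(soup.find('p').get_text())                   # 'one'  - first match only
print([p.get_text() for p in soup.find_all('p')])  # ['one', 'two'] - every match
print([p.get_text() for p in soup.select('p.a')])  # ['one', 'two'] - CSS selector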

 

 

#1. Crawling NAVER news

 

Example #1. When the attribute has a single value

import requests
from bs4 import BeautifulSoup
 
url = 'http://news.naver.com/main/read.nhn?mode=LSD&mid=shm&sid1=105&oid=469&aid=0000209556'
 
res = requests.get(url)
html = res.content
# print(html) # prints the raw HTML data
 
soup = BeautifulSoup(html, 'html5lib')
title = soup.find('h3', id = 'articleTitle')
# or: title = soup.find('h3', attrs = {'id' : 'articleTitle'})
 
print(title)
print('\n')
print(title.get_text())
 
==========================<<Output>>==========================
 
<h3 class="font1" id="articleTitle">단통법 보조금 상한제, 이달 조기 폐지 어려울 듯</h3>
 
 
단통법 보조금 상한제, 이달 조기 폐지 어려울 듯

 

Example #2. When the attribute has two or more values

import requests
from bs4 import BeautifulSoup
 
url = 'http://news.naver.com/main/read.nhn?mode=LSD&mid=shm&sid1=105&oid=469&aid=0000209556'
 
res = requests.get(url)
html = res.content
soup = BeautifulSoup(html, 'html5lib')
 
title = soup.find('h3', attrs = {'class' : ['font1', 'tts_head']})
print(title.get_text())
 
==========================<<Output>>==========================
 
단통법 보조금 상한제, 이달 조기 폐지 어려울 듯
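For reference, the same multi-class match can be written as a CSS selector; a small sketch, reusing the soup object from the example above:

# chaining both class values with '.' matches elements carrying both classes
title = soup.select('h3.font1.tts_head')[0]
print(title.get_text())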

 

import requests
from bs4 import BeautifulSoup
 
url = 'http://news.naver.com/main/read.nhn?mode=LSD&mid=shm&sid1=105&oid=469&aid=0000209556'
 
res = requests.get(url)
html = res.content
soup = BeautifulSoup(html, 'html5lib')
 
h3_list = soup.find_all('h3')
 
print(h3_list[2].get_text())
 
print('='*20)
 
for h3 in h3_list:
    print(h3.get_text())
 
==========================<<Output>>==========================
 
단통법 보조금 상한제, 이달 조기 폐지 어려울 듯
====================
날씨정보
주요뉴스
단통법 보조금 상한제, 이달 조기 폐지 어려울 듯
한국일보 관련뉴스언론사 페이지로 이동합니다.

 

 

#2. Crawling the CINE21 people ranking (when a POST request is required)

import requests
from bs4 import BeautifulSoup
 
url = 'http://www.cine21.com/rank/person/content'
 
info_data = {}
info_data['section'] = 'actor'
info_data['gender'] = 'm'
info_data['page'] = '1'
info_data['period_start'] = '2016-12'
 
res = requests.post(url, data = info_data)
html = res.text
soup = BeautifulSoup(html, 'html5lib')
 
div_names = soup.find_all('div', attrs = {'class' : 'name'})
 
for name in div_names:
    print(name.a.get_text()) # name.a is equivalent to name.find('a')
 
==========================<<Output>>==========================
 
박서준(2편)
유해진(5편)
강하늘(4편)
박혁권(4편)
송강호(11편)
황정민(7편)
김남길(3편)

 

import re
 
for name in div_names:
    name = name.a.get_text()
    name = re.sub(r'\(.+\)', '', name)  # strip the parenthesized film count
    print(name)
 
==========================<<Output>>==========================
 
박서준
유해진
강하늘
박혁권
송강호
황정민
김남길
 
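One caveat: the pattern r'\(.+\)' is greedy, so a name containing more than one parenthesized group would lose everything from the first '(' to the last ')'. A slightly safer variant, as a sketch:

# '[^)]*' cannot cross a closing parenthesis, so each group is removed separately
name = re.sub(r'\([^)]*\)', '', name)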

 

 

# Crawling the people ranking for pages 1-10

import requests
from bs4 import BeautifulSoup
 
url = 'http://www.cine21.com/rank/person/content'
 
for page in range(1, 11):
    info_data = {}
    info_data['section'] = 'actor'
    info_data['gender'] = 'm'
    info_data['page'] = '1'   # note: fixed at '1', so every iteration refetches page 1 (hence the repeated names below); str(page) would walk the pages
    info_data['period_start'] = '2016-12'
    
    res = requests.post(url, data = info_data)
    soup = BeautifulSoup(res.text, 'html5lib')
    div_names = soup.find_all('div', class_ = 'name')
    
    for name in div_names:
        print(name.a.get_text())
        # name.a is equivalent to name.find('a')
 
==========================<<Output>>==========================
 
박서준(2편)
유해진(5편)
강하늘(4편)
박혁권(4편)
송강호(11편)
황정민(7편)
김남길(3편)
박서준(2편)
유해진(5편)
강하늘(4편)
박혁권(4편)
송강호(11편)
황정민(7편)
김남길(3편)
박서준(2편)
유해진(5편)
강하늘(4편)
박혁권(4편)
송강호(11편)
황정민(7편)
김남길(3편)
박서준(2편)
유해진(5편)
강하늘(4편)
박혁권(4편)
송강호(11편)
황정민(7편)
김남길(3편)
박서준(2편)
유해진(5편)
강하늘(4편)
박혁권(4편)
송강호(11편)
황정민(7편)
김남길(3편)
박서준(2편)
유해진(5편)
강하늘(4편)
박혁권(4편)
송강호(11편)
황정민(7편)
김남길(3편)
박서준(2편)
유해진(5편)
강하늘(4편)
박혁권(4편)
송강호(11편)
황정민(7편)
김남길(3편)
박서준(2편)
유해진(5편)
강하늘(4편)
박혁권(4편)
송강호(11편)
황정민(7편)
김남길(3편)
박서준(2편)
유해진(5편)
강하늘(4편)
박혁권(4편)
송강호(11편)
황정민(7편)
김남길(3편)
박서준(2편)
유해진(5편)
강하늘(4편)
박혁권(4편)
송강호(11편)
황정민(7편)
김남길(3편)

 

 

# Crawling CINE21 images

import requests
from bs4 import BeautifulSoup
 
url = 'http://www.cine21.com/rank/person/content'
 
info_data = {}
info_data['section'] = 'actor'
info_data['period_start'] = '2017-09'
info_data['gender'] = 'all'
info_data['page'] = '1'
 
res = requests.post(url, data = info_data)
html = res.content
soup = BeautifulSoup(html, 'html5lib')
 
img_names = soup.find_all('img')
for img in img_names:
    ## the image URL lives in the src attribute, so read img['src'] instead of calling get_text()
    print(img['src'])
 
==========================<<Output>>==========================
 
http://image.cine21.com/resize/cine21/person/2017/0913/10_38_54__59b88c2e80128[X145,145].jpg
http://image.cine21.com/resize/cine21/poster/2017/0831/11_12_46__59a7709e36596[X85,120].jpg
http://image.cine21.com/resize/cine21/poster/2017/0517/14_07_10__591bda7ed5a14[X85,120].jpg
http://image.cine21.com/resize/cine21/person/2017/0405/15_23_27__58e48d5f53b32[X145,145].jpg
http://image.cine21.com/resize/cine21/poster/2017/0831/11_12_46__59a7709e36596[X85,120].jpg
http://image.cine21.com/resize/cine21/person/2015/0129/10_06_28__54c9879475473[X145,145].jpg
http://image.cine21.com/resize/cine21/poster/2017/0831/11_12_46__59a7709e36596[X85,120].jpg
http://image.cine21.com/resize/cine21/person/2015/0519/14_57_22__555ad0c287ca6[X145,145].jpg
http://image.cine21.com/resize/cine21/poster/2017/0814/10_19_30__5990faa2eed7f[X85,120].jpg
http://image.cine21.com/resize/cine21/person/2017/0913/10_27_22__59b8897ac8a31[X145,145].jpg
http://image.cine21.com/resize/cine21/poster/2017/0911/09_47_35__59b5dd27c9432[X85,120].jpg
http://image.cine21.com/resize/cine21/poster/2014/0106/14_21_34__52ca3d5ec9513[X85,120].jpg
http://image.cine21.com/resize/cine21/person/2012/0906/13_41_08__50482964391dc[X145,145].jpg
http://image.cine21.com/resize/cine21/poster/2017/0831/11_12_46__59a7709e36596[X85,120].jpg
http://image.cine21.com/resize/cine21/poster/2014/1218/13_09_38__5492538233f21[X85,120].jpg
http://image.cine21.com/resize/cine21/poster/2013/1114/10_42_00__52842a68c7fda[X85,120].jpg
http://image.cine21.com/resize/cine21/poster/2015/0713/16_22_46__55a36746b87c9[X85,120].png
http://image.cine21.com/resize/cine21/poster/2005/0621/M0010160_ladyvengence_main_p1[X85,120].jpg
http://image.cine21.com/resize/cine21/person/2017/0913/10_28_25__59b889b95b3d8[X145,145].jpg
http://image.cine21.com/resize/cine21/poster/2017/0911/09_47_35__59b5dd27c9432[X85,120].jpg
http://image.cine21.com/resize/cine21/poster/2011/0207/M0010005_poster[X85,120].jpg
http://image.cine21.com/resize/cine21/poster/2013/0111/15_38_12__50efb3540fd9e[X85,120].jpg

 

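Printing the URLs is only half the job; to actually save the images, a minimal sketch along these lines should work, reusing img_names and requests from the block above (the 'images' directory name is arbitrary):

import os
 
os.makedirs('images', exist_ok=True)              # arbitrary output directory
 
for i, img in enumerate(img_names):
    img_url = img['src']
    ext = os.path.splitext(img_url)[1] or '.jpg'  # keep the URL's own extension
    img_res = requests.get(img_url)               # fetch the raw image bytes
    with open(os.path.join('images', '{}{}'.format(i, ext)), 'wb') as f:
        f.write(img_res.content)                  # binary mode: these are bytes, not text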

 

 

# Crawling links

import requests
import re
from bs4 import BeautifulSoup
 
base_url = 'http://www.cine21.com'
cine_url = '{}/rank/person/content'.format(base_url)
 
for page in range(1, 10):
    info_data = {}
    info_data['section'] = 'actor'
    info_data['period_start'] = '2017-09'
    info_data['gender'] = 'all'
    info_data['page'] = '1'   # fixed at '1' again, so each iteration yields the same links below; str(page) would walk pages 1-9
    
    res = requests.post(cine_url, data = info_data)
    soup = BeautifulSoup(res.text, 'html5lib')
    
    actors = soup.find_all('div', attrs = {'class' : 'name'})
    for name in actors:
        link = "{}{}".format(base_url, name.a['href'])   # href is site-relative, so prepend the base URL
        print(link)
 
==========================<<Output>>==========================
 
http://www.cine21.com/db/person/info/?person_id=8846
http://www.cine21.com/db/person/info/?person_id=55827
http://www.cine21.com/db/person/info/?person_id=85703
http://www.cine21.com/db/person/info/?person_id=84745
http://www.cine21.com/db/person/info/?person_id=6126
http://www.cine21.com/db/person/info/?person_id=19538
http://www.cine21.com/db/person/info/?person_id=70688
http://www.cine21.com/db/person/info/?person_id=8846
http://www.cine21.com/db/person/info/?person_id=55827
http://www.cine21.com/db/person/info/?person_id=85703
http://www.cine21.com/db/person/info/?person_id=84745
http://www.cine21.com/db/person/info/?person_id=6126
http://www.cine21.com/db/person/info/?person_id=19538
http://www.cine21.com/db/person/info/?person_id=70688
http://www.cine21.com/db/person/info/?person_id=8846
http://www.cine21.com/db/person/info/?person_id=55827
http://www.cine21.com/db/person/info/?person_id=85703
http://www.cine21.com/db/person/info/?person_id=84745
http://www.cine21.com/db/person/info/?person_id=6126
http://www.cine21.com/db/person/info/?person_id=19538
http://www.cine21.com/db/person/info/?person_id=70688
http://www.cine21.com/db/person/info/?person_id=8846
http://www.cine21.com/db/person/info/?person_id=55827
http://www.cine21.com/db/person/info/?person_id=85703
http://www.cine21.com/db/person/info/?person_id=84745
http://www.cine21.com/db/person/info/?person_id=6126
http://www.cine21.com/db/person/info/?person_id=19538
http://www.cine21.com/db/person/info/?person_id=70688
http://www.cine21.com/db/person/info/?person_id=8846
http://www.cine21.com/db/person/info/?person_id=55827
http://www.cine21.com/db/person/info/?person_id=85703
http://www.cine21.com/db/person/info/?person_id=84745
http://www.cine21.com/db/person/info/?person_id=6126
http://www.cine21.com/db/person/info/?person_id=19538
http://www.cine21.com/db/person/info/?person_id=70688
http://www.cine21.com/db/person/info/?person_id=8846
http://www.cine21.com/db/person/info/?person_id=55827
http://www.cine21.com/db/person/info/?person_id=85703
http://www.cine21.com/db/person/info/?person_id=84745
http://www.cine21.com/db/person/info/?person_id=6126
http://www.cine21.com/db/person/info/?person_id=19538
http://www.cine21.com/db/person/info/?person_id=70688
http://www.cine21.com/db/person/info/?person_id=8846
http://www.cine21.com/db/person/info/?person_id=55827
http://www.cine21.com/db/person/info/?person_id=85703
http://www.cine21.com/db/person/info/?person_id=84745
http://www.cine21.com/db/person/info/?person_id=6126
http://www.cine21.com/db/person/info/?person_id=19538
http://www.cine21.com/db/person/info/?person_id=70688
http://www.cine21.com/db/person/info/?person_id=8846
http://www.cine21.com/db/person/info/?person_id=55827
http://www.cine21.com/db/person/info/?person_id=85703
http://www.cine21.com/db/person/info/?person_id=84745
http://www.cine21.com/db/person/info/?person_id=6126
http://www.cine21.com/db/person/info/?person_id=19538
http://www.cine21.com/db/person/info/?person_id=70688
http://www.cine21.com/db/person/info/?person_id=8846
http://www.cine21.com/db/person/info/?person_id=55827
http://www.cine21.com/db/person/info/?person_id=85703
http://www.cine21.com/db/person/info/?person_id=84745
http://www.cine21.com/db/person/info/?person_id=6126
http://www.cine21.com/db/person/info/?person_id=19538
http://www.cine21.com/db/person/info/?person_id=70688
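String concatenation works here because every href on this page happens to be site-relative. A more robust way to build the links, as a sketch reusing the actors list from above, is urllib.parse.urljoin, which leaves absolute URLs intact and resolves relative ones against the base:

from urllib.parse import urljoin
 
for name in actors:
    # urljoin('http://www.cine21.com', '/db/person/info/?person_id=8846')
    # -> 'http://www.cine21.com/db/person/info/?person_id=8846'
    print(urljoin(base_url, name.a['href']))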

 

 

#3. Crawling with CSS selectors

- Search for tags using CSS selection syntax

- Use the select function

- [Reference] https://saucelabs.com/resources/articles/selenium-tips-css-selectors

 

 

Example #1. The select function returns every match as a list.

import requests
from bs4 import BeautifulSoup
import json
 
res = requests.get('http://v.media.daum.net/v/20171003150106396')
html = res.content
soup = BeautifulSoup(html, 'html5lib')
 
# tag search
title_find = soup.find('title')
print(title_find.get_text())
 
# the select function returns every match as a list
title_select = soup.select('title')[0]
print(title_select.get_text())
 
=====================<<Output>>=====================
 
인류가 내뿜는 이산화탄소, '지구의 파멸' 이끈다 | Daum 뉴스
인류가 내뿜는 이산화탄소, '지구의 파멸' 이끈다 | Daum 뉴스
 

 

 

Example #2. A space in the selector matches descendant tags; they need not be direct children.

 

 

import requests
from bs4 import BeautifulSoup
import json
 
res = requests.get('http://v.media.daum.net/v/20171003150106396')
html = res.content
soup = BeautifulSoup(html, 'html5lib')
 
title = soup.select('html head title')[0]
print(title.get_text())
 
title = soup.select('html title')[0]
print(title.get_text())
 
================<<Output>>================
 
인류가 내뿜는 이산화탄소, '지구의 파멸' 이끈다 | Daum 뉴스
인류가 내뿜는 이산화탄소, '지구의 파멸' 이끈다 | Daum 뉴스
 

 

 

Example #3. Using '>' matches direct children only

title = soup.select('head > title')[0]
print(title.get_text())

 

 

Example #4. '.' selects by tag class, '#' selects by id

import requests
from bs4 import BeautifulSoup
import json
 
res = requests.get('http://v.media.daum.net/v/20171003150106396')
soup = BeautifulSoup(res.content, 'html5lib')
 
# '.' selects by class:
# fetch only the elements whose class is article_view
article_view = soup.select('.article_view')[0]
print(type(article_view), len(article_view), '\n')
 
for p in article_view.find_all('p'):
    print(p.get_text())
 
================<<Output>>================
 
<class 'bs4.element.Tag'> 3 
 
[서울신문 나우뉴스]
 
-2100년에 ‘제6의 대멸종’ 시작될지도
2100년까지 인류가 배출할 이산화탄소의 총량이 지구에 ‘제6의 대멸종’ 방아쇠를 당길지도 모른다는 새 연구 결과가 발표됐다.
지난 1세기 남짓 동안 인류가 지구 대기 속으로 배출해낸 이산화탄소 양의 수준이 이윽고 지구를 ‘대파국의 문턱’에 다다르게 했으며, 이 문턱을 넘어서면 지구 환경의 불안정과 대량멸종은 피할 수 없게 될 것이라고 새 연구는 예측하고 있다.
비록 대량멸종이 즉각적으로 일어나지 않는다 해도 앞으로 1만 년에 걸쳐 대량멸종이 진행될 것으로 본다고 논문 공동저자 대니얼 로트먼 매사추세츠 공과대학(MIT) 지구물리학 교수가 말했다.
지구 역사 45억 년 동안 지구상에는 생명의 풍성한 향연이 이루어졌다. 지난 5억 년 동안 이 생명의 향연은 적어도 다섯 차례 대량멸종으로 쑥대밭이 되었다. 수많은 종들이 하릴없이 사라진 대량멸종 가운데도 페름기 대멸종이 가장 혹독했다. 이 대멸종에서 지구의 바다에서 95%의 생명이 멸절했고, 육지생물은 70%가 사라졌다. 이 모든 멸종은 하나의 유사점을 공유한다.
로트먼은 “이 다섯 차례의 대량멸종이 있을 때마다 지구적인 탄소 사이클의 붕괴가 선행되었다”고 밝혔다. 이산화탄소와 생영체의 죽음은 직접적인 상관관계를 가지고 있다. 대기 중의 과도한 이산화탄소는 기온을 상승시켜, 마침내 생명이 살 수 없는 기온이 되게 하며, 그 뒤 화산 폭발을 야기해 다시 지구를 식히는 순환이 이루어지는 것이다.
예컨대 2억 5000만 년 전 페름기의 끝에 바다의 이산화탄소 수치가 치솟았던 사실을 바다 암석이 보여주고 있다. 이산화탄소는 지구 생명의 대량멸종과 긴밀한 관계를 맺고 있다. 지구 대기 속과 바다의 이산화탄소 수치는 급격한 환경변화의 동인이며, 그것이 이윽고 대량멸종으로 이어지는 것이다. 그러나 ‘탄소 폭주’ 한 가지가 대량멸종을 가져오는 것은 아니다.
9월 20일자 발행의 ‘사이언스 어드밴시스’ 지에 발표된 새 연구는 대량멸종의 원인으로 두 요소가 상정되었는데, 이산화탄소 증가율과 그 시기 이산화탄소의 총량이 바로 그것이다.
이 두 가지 수치를 계산하기 위해 로트먼은 지난 5억 4000만 년 기간에 속하는 31개 지질시대의 바위에 포함되어 있는 탄소 동위원소(중성자 수가 다른 탄소원자)를 측정했다.
그 데이터에서 로트먼과 그의 동료들은 지질학적 기록에 나타난 대량멸종과 관련된 것으로 보이는 탄소 양의 변화 비율과 그 총량을 결정할 수 있었다. 이어서 그들은 현재에 이르는 탄소의 변화 상황을 산정했다. 이에 따라 현재 인류는 가공할 정도의 비율로 이산화탄소를 대기중으로 배출하고 있다는 사실이 드러났다.
비록 상당한 불확실성은 있지만, 이번 세기 말까지 탄소가 추가적으로 310기가톤(1기가는 10억)이 바다에 더 축적되면 대량멸종의 방아쇠를 당기는 데 부족함이 없다는 계산서를 뽑아냈다고 로트먼은 밝혔다. 그 다음은 어떻게 될까?
로트먼은 “그 다음은 대량멸종이 뒤따를 것"이라면서 “그러나 급격한 대량멸종이 아니라 1만 년에 걸쳐 천천히 진행되는 멸종시대에 접어들 것”이라고 설명했다.
리 검프 펜실베니아 주립대 교수는 “만약 인류가 이산화탄소 배출을 극적으로 감소시키지 않는다면 페름기의 대멸종 같은 지구 대파국은 피할 수 없을 것”이라고 밝혔다.
이광식 칼럼니스트 joand999@naver.com 

 

## Find the div tag whose class is article_view

body = soup.select('div.article_view')[0]
 
for p in body.find_all('p'):
    print(p.get_text())

 

## Find the tag whose id is harmonyContainer

container = soup.select('#harmonyContainer')[0]
print(container.get_text())

 

## Among the descendants of the div whose id is mArticle, find the tag whose id is harmonyContainer

 

 

title = soup.select('div#mArticle div#harmonyContainer')[0]
print(title.get_text())

 

 

Example #5. Collecting links

 

 

import re
import requests
from bs4 import BeautifulSoup
 
res = requests.get('http://v.media.daum.net/v/20171003150106396')
content = res.content
soup = BeautifulSoup(content, 'html5lib')
 
# find the a tags that carry an href attribute
links = soup.select('a[href]')
 
for link in links:
    if re.search(r'http://\w+', link['href']):
        print(link['href'])
 
================<<Output>>================
 
http://www.daum.net/
http://media.daum.net/
http://media.daum.net/entertain/
http://sports.media.daum.net/sports/
http://media.daum.net/lab/keyword
http://media.daum.net/
http://media.daum.net/society/
http://media.daum.net/politics/
http://media.daum.net/economic/
http://media.daum.net/foreign/
http://media.daum.net/culture/
http://media.daum.net/digital/
http://media.daum.net/ranking/popular/
http://media.daum.net/series/
http://media.daum.net/photo/
http://media.daum.net/tv/
http://media.daum.net/1boon/
http://media.daum.net/storyfunding/
http://media.daum.net/exhibition/
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=서울%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=수원%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=인천%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=대구%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=대전%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=광주%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=부산%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=울산%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=울릉/독도%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=춘천%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=강릉%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=백령%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=청주%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=전주%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=목포%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=여수%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=제주%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=안동%20날씨
http://search.daum.net/search?nil_suggest=btn&w=tot&DA=SBC&q=창원%20날씨
http://www.seoul.co.kr/
http://nownews.seoul.co.kr/news/newsView.php?id=20170714601006&wlog_tag3=daum_relation
http://nownews.seoul.co.kr/news/newsView.php?id=20170712601018&wlog_tag3=daum_relation
http://nownews.seoul.co.kr/news/newsView.php?id=20170830601013&wlog_tag3=daum_relation
http://nownews.seoul.co.kr/news/newsView.php?id=20170824601003&wlog_tag3=daum_relation
http://nownews.seoul.co.kr/news/newsView.php?id=20170809601015&wlog_tag3=daum_relation
http://nownews.seoul.co.kr/news/newsView.php?id=20170711601008&wlog_tag3=daum_relation
http://nownews.seoul.co.kr/news/newsView.php?id=20170927601023&wlog_tag3=daum_relation
http://nownews.seoul.co.kr/news/newsView.php?id=20170905601014&wlog_tag3=daum_relation
http://nownews.seoul.co.kr/news/newsView.php?id=20170823601001&wlog_tag3=daum_relation
http://nownews.seoul.co.kr/news/newsView.php?id=20170515601007&wlog_tag3=daum_relation
http://media.daum.net/ranking/popular?include=society,politics,culture,economic,foreign,digital
http://media.daum.net/photo-viewer?cid=182938#20171003162602434
http://media.daum.net/issue/461915
http://media.daum.net/issue/461915
http://media.daum.net/issue/470288
http://media.daum.net/issue/470288
http://media.daum.net/issue/460574
http://media.daum.net/issue/460574
http://media.daum.net/
http://media.daum.net/society/
http://media.daum.net/politics/
http://media.daum.net/economic/
http://media.daum.net/foreign/
http://media.daum.net/culture/
http://media.daum.net/digital/
http://media.daum.net/photo/
http://media.daum.net/tv/
http://media.daum.net/issue/
http://media.daum.net/lab/
http://media.daum.net/lab/keyword
http://media.daum.net/lab/quote/
http://media.daum.net/cp/
http://media.daum.net/newsbox/
http://media.daum.net/breakingnews/
http://media.daum.net/ranking/popular/
http://media.daum.net/series/
http://media.daum.net/1boon/
http://media.daum.net/storyfunding/
http://media.daum.net/info/intro.html
http://media.daum.net/info/notice/
http://media.daum.net/info/bbsrule.html
http://policy.daum.net/info/info
http://biz.daum.net/
http://cs.daum.net/faq/63.html
http://media.daum.net/info/newscenter24.html
http://media.daum.net/info/edit.html
http://media.daum.net/info/correct/
http://www.kakaocorp.com/
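The regex filter can also be folded into the selector itself: CSS attribute selectors support prefix matching. A near-equivalent sketch, reusing soup from above:

# a[href^="http://"] matches only anchors whose href starts with "http://"
for link in soup.select('a[href^="http://"]'):
    print(link['href'])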

 

 

 
