Looking for a PHP snippet that detects visits from the Baidu spider


Ordinary users and search engine spiders differ in the user agent they send.

The Baidu spider's user agent contains Baiduspider, while Google's contains Googlebot, so we can check the user agent that was sent to decide whether to turn away ordinary visitors. The function can be written like this:

function isAllowAccess($url, $directForbidden = false) {
    // Spider user-agent patterns that are allowed through
    $allowed = array('/baiduspider/i', '/googlebot/i');
    $user_agent = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    $valid = false;
    foreach ($allowed as $pattern) {
        if (preg_match($pattern, $user_agent)) {
            $valid = true;
            break;
        }
    }
    // Not a spider and redirection was requested: send the visitor away
    if (!$valid && $directForbidden) {
        header('Location: ' . $url);
        exit;
    }
    return $valid;
}
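For reference, here is a quick sketch of what those two patterns match. The user-agent strings below are typical examples for illustration only; the strings real spiders send vary by version:

```php
<?php
// Sample user-agent strings (illustrative; real spider UAs vary by version).
$agents = array(
    'baidu'   => 'Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)',
    'google'  => 'Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)',
    'browser' => 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/124.0',
);

$allowed = array('/baiduspider/i', '/googlebot/i');

foreach ($agents as $name => $ua) {
    $isSpider = false;
    foreach ($allowed as $pattern) {
        if (preg_match($pattern, $ua)) {
            $isSpider = true;
            break;
        }
    }
    echo $name, ': ', $isSpider ? 'spider' : 'ordinary visitor', "\n";
}
// prints:
// baidu: spider
// google: spider
// browser: ordinary visitor
?>
```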

Hope this answers your question. Thx

Let me also recommend a fairly well-known foreign search engine that includes a web spider program. Someone was asking for this kind of material a while back; now it's here, and you can study the source code.

Official site:

http://phpdig.toiletoine.net/

Demo:

http://phpdig.toiletoine.net/sea ... te=100&option=start

There was a Chinese-language demo: I provided one before (a localization of version 1.62), but I had no backup when I changed hosting in November 2003, so it's gone. If anyone downloaded it, please check whether you still have it.

Download:

This is the download of the most recently updated version (December 2003, 1.65 En):

http://www.phpdig.net/navigation.php?action=download

Demo:

http://www.phpdig.net/navigation.php?action=demo

Main features:

A Google/Baidu-style search engine, built with PHP + MySQL.

PhpDig is an HTTP spider/search engine written in PHP with a MySQL database as the backend.

HTTP spidering: PhpDig follows links the way any web browser would, to build the list of pages to index. Links can appear in area maps or frames. PhpDig supports redirects, and any syntax of the HREF attribute is followed.

PhpDig does not leave the root site you define for indexing. The spidering depth is chosen by the user.

All HTML content is indexed, both static and dynamic pages. PhpDig checks the MIME type of the document, or tests for the existence of an <html> tag at the beginning of it.

Full-text search is supported

Full-text indexing: PhpDig indexes every word of a document, except short words (fewer than 3 letters) and common words; these are defined in a text file.

Standalone numbers are not indexed, but numbers inside words are. Underscores count as part of a word.

The number of occurrences of a word in a document is stored. Words in the title can carry greater weight when ranking results.
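The indexing rules above can be sketched in a few lines of PHP. This is only an illustration of the rules as described, not PhpDig's actual code, and the stop-word list here is a hypothetical stand-in for PhpDig's text file:

```php
<?php
// Rough sketch of the indexing rules described above (not PhpDig's actual code).
// $stopWords stands in for the common-word text file.
function indexWords($text, $stopWords = array('the', 'and')) {
    // Words may contain letters, digits and underscores;
    // underscores count as part of a word.
    preg_match_all('/[a-z0-9_]+/i', $text, $matches);
    $counts = array();
    foreach ($matches[0] as $word) {
        $word = strtolower($word);
        if (strlen($word) < 3) continue;            // skip short words
        if (ctype_digit($word)) continue;           // skip standalone numbers
        if (in_array($word, $stopWords)) continue;  // skip common words
        // store occurrence counts per word
        $counts[$word] = isset($counts[$word]) ? $counts[$word] + 1 : 1;
    }
    return $counts;
}

// word_2 is counted twice; 1995 and the stop words are skipped.
print_r(indexWords('The spider indexes the word_2 and 1995 twice: word_2'));
?>
```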

Indexing of several file formats, such as PDF, is supported

File types that can be indexed: PhpDig indexes HTML and text files by itself.

PhpDig can also index PDF, MS Word, and MS Excel files if you install external binaries for this purpose on the spidering machines.

To demonstrate this feature, you can search Hamlet (tragedy, William Shakespeare) in MS Word format, and L'Avare (comedy, Molière) in PDF format.

robots.txt is supported

Other features: PhpDig tries to read a robots.txt file at the server root, and it also looks for robots meta tags.

The Last-Modified header value is stored in the database to avoid redundant indexing, as is the meta revisit-after tag.
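The Last-Modified idea can be sketched like this. This is only an illustration of the technique, not PhpDig's implementation (which stores the value in its MySQL backend), and the dates below are made-up examples:

```php
<?php
// Sketch: re-index a page only if its Last-Modified header is newer
// than the date stored from the previous crawl (dates are made-up examples).
function shouldReindex($lastModifiedHeader, $storedLastModified) {
    if ($lastModifiedHeader === null || $storedLastModified === null) {
        return true; // no date to compare against, so index to be safe
    }
    return strtotime($lastModifiedHeader) > strtotime($storedLastModified);
}

var_dump(shouldReindex('Wed, 10 Dec 2003 08:00:00 GMT',
                       'Mon, 01 Dec 2003 08:00:00 GMT')); // bool(true): page changed
var_dump(shouldReindex('Mon, 01 Dec 2003 08:00:00 GMT',
                       'Mon, 01 Dec 2003 08:00:00 GMT')); // bool(false): unchanged
?>
```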

It can build a full-text index of a specific site, and the spider can automatically fetch all URLs to a depth of 1 to 9 levels.

The spider code in it is very well written; anyone interested is encouraged to study it.

Hope this is useful to you!


Feel free to share; when reposting, please credit the source: 内存溢出

Original URL: http://outofmemory.cn/yw/8043954.html
