Help!! On Linux I need to develop a program that reads *.html files and does full-text keyword search over them. Up to 500 points on offer (200 points)

  • Thread starter: wangxiaoling

wangxiaoling (unregistered guest):
We can't use a database's built-in full-text search, so this is the only approach we can think of.
Up to 500 points on offer.
 
This can be done with Kylix 2.
 
If you can use Java:
http://www.delphibbs.com/delphibbs/dispq.asp?lid=755450
 
Many thanks, yyson.
Java is exactly what I'm using.
 
Are you building a site-wide full-text search?
 
Yes, a site-wide full-text search over all the HTML files.
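Since the documents are HTML files, they must first be reduced to plain text before indexing. A minimal regex-based tag-stripping sketch (a hypothetical helper, not from this thread; a real HTML parser copes better with entities and malformed markup):

```java
public class HtmlStrip {
    // Crude extraction: drop <script>/<style> bodies, then all remaining
    // tags, then collapse runs of whitespace to single spaces.
    public static String strip(String html) {
        return html.replaceAll("(?is)<(script|style)[^>]*>.*?</\\1>", " ")
                   .replaceAll("<[^>]+>", " ")
                   .replaceAll("\\s+", " ")
                   .trim();
    }

    public static void main(String[] args) {
        String page = "<html><body><h1>Hello</h1><p>full-text search</p></body></html>";
        System.out.println(strip(page)); // Hello full-text search
    }
}
```

The stripped text is what would be handed to the analyzer as the "content" field.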
 
Use Lucene; this site's full-text search is built on it.
http://jakarta.apache.org/lucene
http://www.delphibbs.com/delphibbs/dispq.asp?lid=755450
 
Mr. Sun:
I've long admired your work; I believe you've answered my posts before, so I count myself as one of your students. I've already sent you an email and hope you can help. The project I'm working on is an e-government system for a large ministry, and it needs full-text search. But time is short: the prototype review is on January 1st, so there's no time to learn Lucene from scratch. Could you let me see your source code?
Many thanks.
 
Lucene is open source to begin with.
 
I'm throwing you a lifeline; remember to thank me.
package org.apache.lucene.analysis.cn;

import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;

/**
 * Title: ChineseAnalyzer
 * Description: Subclass of org.apache.lucene.analysis.Analyzer,
 * built from a ChineseTokenizer, filtered with a ChineseFilter.
 * Copyright: Copyright (c) 2001
 * @author Yiyi Sun
 * @version 1.0
 */
public class ChineseAnalyzer extends Analyzer {

    public ChineseAnalyzer() {
    }

    /**
     * Creates a TokenStream which tokenizes all the text in the provided Reader.
     *
     * @return A TokenStream built from a ChineseTokenizer filtered with a ChineseFilter.
     */
    public final TokenStream tokenStream(String fieldName, Reader reader) {
        TokenStream result = new ChineseTokenizer(reader);
        result = new ChineseFilter(result);
        return result;
    }
}
package org.apache.lucene.analysis.cn;

import java.util.Hashtable;
import org.apache.lucene.analysis.*;

/**
 * Title: ChineseFilter
 * Description: Filter with a stop-word table.
 * Rules: No digits are allowed.
 *        An English word/token must be longer than 1 character.
 *        One Chinese character counts as one Chinese word.
 * TODO:
 *   1. Add Chinese stop words, such as \ue400
 *   2. Dictionary-based Chinese word extraction
 *   3. Intelligent Chinese word extraction
 * Copyright: Copyright (c) 2001
 * @author Yiyi Sun
 * @version 1.0
 */
public final class ChineseFilter extends TokenFilter {

    // Only English stop words for now; Chinese to be added later.
    public static final String[] STOP_WORDS = {
        "and", "are", "as", "at", "be", "but", "by",
        "for", "if", "in", "into", "is", "it",
        "no", "not", "of", "on", "or", "such",
        "that", "the", "their", "then", "there", "these",
        "they", "this", "to", "was", "will", "with"
    };

    private Hashtable stopTable;

    public ChineseFilter(TokenStream in) {
        input = in;
        stopTable = new Hashtable(STOP_WORDS.length);
        for (int i = 0; i < STOP_WORDS.length; i++)
            stopTable.put(STOP_WORDS[i], STOP_WORDS[i]);
    }

    public final Token next() throws java.io.IOException {
        for (Token token = input.next(); token != null; token = input.next()) {
            String text = token.termText();
            if (stopTable.get(text) == null) {
                switch (Character.getType(text.charAt(0))) {
                case Character.LOWERCASE_LETTER:
                case Character.UPPERCASE_LETTER:
                    // An English word/token must be longer than 1 character.
                    if (text.length() > 1) {
                        return token;
                    }
                    break;
                case Character.OTHER_LETTER:
                    // One Chinese character counts as one Chinese word.
                    // Chinese word extraction to be added here later.
                    return token;
                }
            }
        }
        return null;
    }
}
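The filter's keep/drop decision can be exercised outside Lucene. A self-contained sketch that mirrors its rules (abbreviated stop list; illustrative only, not the Lucene class itself):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class FilterRuleDemo {
    // Abbreviated stop list for the demo; ChineseFilter uses a longer one.
    static final Set<String> STOP =
        new HashSet<>(Arrays.asList("the", "of", "to", "and", "is"));

    // Mirrors ChineseFilter's decision for one token.
    public static boolean keep(String text) {
        if (STOP.contains(text)) return false;
        switch (Character.getType(text.charAt(0))) {
            case Character.LOWERCASE_LETTER:
            case Character.UPPERCASE_LETTER:
                return text.length() > 1;   // drop one-letter English tokens
            case Character.OTHER_LETTER:
                return true;                // keep each Chinese character
            default:
                return false;               // drop digits and anything else
        }
    }

    public static void main(String[] args) {
        System.out.println(keep("the"));    // false: stop word
        System.out.println(keep("a"));      // false: single English letter
        System.out.println(keep("linux"));  // true
        System.out.println(keep("中"));     // true: one Chinese character
    }
}
```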
package org.apache.lucene.analysis.cn;

import java.io.Reader;
import org.apache.lucene.analysis.*;

/**
 * Title: ChineseTokenizer
 * Description: Extracts tokens from the stream using Character.getType().
 * Rule: One Chinese character is one token.
 * Copyright: Copyright (c) 2001
 * @author Yiyi Sun
 * @version 1.0
 */
public final class ChineseTokenizer extends Tokenizer {

    public ChineseTokenizer(Reader in) {
        input = in;
    }

    private int offset = 0, bufferIndex = 0, dataLen = 0;
    private final static int MAX_WORD_LEN = 255;
    private final static int IO_BUFFER_SIZE = 1024;
    private final char[] buffer = new char[MAX_WORD_LEN];
    private final char[] ioBuffer = new char[IO_BUFFER_SIZE];
    private int length;
    private int start;

    private final void push(char c) {
        if (length == 0) start = offset - 1;          // start of token
        buffer[length++] = Character.toLowerCase(c);  // buffer it
    }

    private final Token flush() {
        if (length > 0) {
            return new Token(new String(buffer, 0, length), start, start + length);
        } else
            return null;
    }

    public final Token next() throws java.io.IOException {
        length = 0;
        start = offset;
        while (true) {
            final char c;
            offset++;
            if (bufferIndex >= dataLen) {
                dataLen = input.read(ioBuffer);
                bufferIndex = 0;
            }
            if (dataLen == -1) return flush();
            else
                c = ioBuffer[bufferIndex++];
            switch (Character.getType(c)) {
            case Character.DECIMAL_DIGIT_NUMBER:
            case Character.LOWERCASE_LETTER:
            case Character.UPPERCASE_LETTER:
                push(c);
                if (length == MAX_WORD_LEN) return flush();
                break;
            case Character.OTHER_LETTER:
                if (length > 0) {
                    bufferIndex--;
                    return flush();
                }
                push(c);
                return flush();
            default:
                if (length > 0) return flush();
                break;
            }
        }
    }
}
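Everything in the tokenizer hinges on Character.getType(): ASCII letters and digits are buffered into multi-character tokens, while each CJK character, reported as Character.OTHER_LETTER, becomes a token on its own. A standalone sketch of that classification (illustrative only, not part of the Lucene sources above):

```java
public class GetTypeDemo {
    // Classify a character the same way ChineseTokenizer's switch does.
    public static String classify(char c) {
        switch (Character.getType(c)) {
            case Character.DECIMAL_DIGIT_NUMBER:
            case Character.LOWERCASE_LETTER:
            case Character.UPPERCASE_LETTER:
                return "buffer";     // appended to the current token
            case Character.OTHER_LETTER:
                return "single";     // emitted as a one-character token
            default:
                return "delimiter";  // flushes the current token
        }
    }

    public static void main(String[] args) {
        System.out.println(classify('a'));  // buffer
        System.out.println(classify('7'));  // buffer
        System.out.println(classify('中')); // single (CJK ideograph)
        System.out.println(classify(' '));  // delimiter
    }
}
```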
 
Building the index:
IndexWriter writer = new IndexWriter(INDEX_PATH, new ChineseAnalyzer(), true/false);
Document doc = new Document();
doc.add(Field.Text("content", content));
doc.add(Field.Keyword("id", id));
....
writer.optimize();
writer.close();
Full-text search:
Searcher searcher = new IndexSearcher(indexPath);
ChineseAnalyzer analyzer = new ChineseAnalyzer();
Query query = QueryParser.parse(queryString, "content", analyzer);
Hits hits = searcher.search(query);
// hits holds the matched documents
....
for (int ii = 0; ii < hits.length(); ii++) {
    Document doc = hits.doc(ii);
    String id = doc.get("id");
    String content = doc.get("content");
    ......
}
 
yysun,
Thanks a million; I'm studying it now.
 
Answer accepted.
 
Mr. Sun, I can't compile ChineseAnalyzer.java.
Please help!!
The error messages are:
>javac ChineseAnalyzer.java
ChineseAnalyzer.java:33: cannot resolve symbol
symbol : class ChineseTokenizer
location: class org.apache.lucene.analysis.ch.ChineseAnalyzer
TokenStream result = new ChineseTokenizer( reader );
^
ChineseAnalyzer.java:34: cannot resolve symbol
symbol : class ChineseFilter
location: class org.apache.lucene.analysis.ch.ChineseAnalyzer
result = new ChineseFilter( result );
^
2 errors

ChineseTokenizer and ChineseFilter, however, both compile fine.
 
Where did that ch come from? org.apache.lucene.analysis.ch.ChineseAnalyzer
It should be cn!
All three files belong in the org/apache/lucene/analysis/cn directory.
 
I followed the package you posted to the Lucene site; its package names all end in ch, so the subdirectory should be ch too, right?
Anyway, I've just changed everything to cn, and it still fails with the same errors.
 
I posted to Lucene twice; you should download the second, cn package.
Actually, just copying the three files above in this thread is enough; they definitely work.
Put the files in the org/apache/lucene/analysis/cn directory.
Each file must start with: package org.apache.lucene.analysis.cn;
 
I went over it again with my eyes wide open and made sure everything matches exactly.
Could my Java environment variables be misconfigured?
analysis/cn>javac -verbose -deprecation ChineseAnalyzer.java
[parsing started ChineseAnalyzer.java]
[parsing completed 280ms]
[loading C:/j2sdk1.4.0/jre/lib/rt.jar(java/io/Reader.class)]
[loading C:/java/lucene.jar(org/apache/lucene/analysis/Analyzer.class)]
[loading C:/java/lucene.jar(org/apache/lucene/analysis/TokenStream.class)]
[loading C:/j2sdk1.4.0/jre/lib/rt.jar(java/lang/Object.class)]
[loading C:/j2sdk1.4.0/jre/lib/rt.jar(java/lang/String.class)]
[checking org.apache.lucene.analysis.cn.ChineseAnalyzer]
ChineseAnalyzer.java:33: cannot resolve symbol
symbol : class ChineseTokenizer
location: class org.apache.lucene.analysis.cn.ChineseAnalyzer
TokenStream result = new ChineseTokenizer( reader );
^
ChineseAnalyzer.java:34: cannot resolve symbol
symbol : class ChineseFilter
location: class org.apache.lucene.analysis.cn.ChineseAnalyzer
result = new ChineseFilter( result );
^
[total 8422ms]
2 errors
 
Good heavens, you don't know how to compile a Java package. Don't compile the files one by one; compile them all together:
cd <dir>/org/apache/lucene/analysis/cn
...analysis/cn>javac *.java
I suggest you install Eclipse: http://eclipse.org/downloads
 
Done. Thanks a lot, Mr. Sun!
bow
 
