I have a piece of code that has been working for years until today. After debugging, I realized that the last token is not collected correctly. I think it is because of its length (more than 10k chars).
Code:
StringTokenizer tokens = new StringTokenizer(myString,"&&&&&&&");
(...)
String s=tokens.nextToken();
//Do something with s
s=tokens.nextToken();
//Do something with s
s=tokens.nextToken();
//Do something with s
//Now it's time for the last and biggest token
s=tokens.nextToken(); // --> s does not contain the entire string
You are using StringTokenizer in the wrong way. Your tokenizer does not split at "&&&&&&&" as one would expect, but at '&', because it treats the delimiter string as a set of characters, any single one of which delimits tokens. It then discards empty tokens, which is why you got the expected result anyway. For example:
StringTokenizer tokens = new StringTokenizer("a&&b&&c", "&&&");
while (tokens.hasMoreTokens()) {
    System.out.println(tokens.nextToken());
}
This prints:
a
b
c
So my suspicion is that there is an '&' somewhere within your 10k token. If that is the case, I suggest following msaint's suggestion and using String.split(), which is the way to go if you can afford to modify your old code.
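To illustrate the difference, here is a minimal sketch (the sample string is made up) showing that String.split() treats "&&&&&&&" as a whole separator, so a last field containing a lone '&' survives intact, whereas StringTokenizer would cut it short:

```java
public class SplitDemo {
    public static void main(String[] args) {
        // Hypothetical payload: the last field contains a lone '&',
        // which StringTokenizer would treat as a delimiter.
        String myString = "first&&&&&&&second&&&&&&&last token with & inside";

        // String.split takes a regex; "&&&&&&&" contains no regex
        // metacharacters, so it matches the literal seven-ampersand separator.
        String[] parts = myString.split("&&&&&&&");

        for (String part : parts) {
            System.out.println(part);
        }
        // Prints three lines; the last one keeps its embedded '&'.
    }
}
```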
The API seems to have no limitation in terms of token length. I tried to reproduce your case and couldn't: I was able to get 7 million chars back from StringTokenizer in one token. Check your string for stray delimiter characters first, then switch to split(), since StringTokenizer is a legacy class.
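Before changing any code, you could confirm the diagnosis by checking whether the text after the last full separator actually contains a stray '&'. A minimal sketch (the sample string stands in for the real 10k+ payload):

```java
public class TokenCheck {
    public static void main(String[] args) {
        // Hypothetical sample: the last field contains a lone '&'.
        String myString = "a&&&&&&&b&&&&&&&long text with & in the middle";

        // Locate where the last seven-ampersand separator ends.
        String sep = "&&&&&&&";
        int start = myString.lastIndexOf(sep) + sep.length();
        String lastField = myString.substring(start);

        // A stray '&' here is exactly what makes StringTokenizer
        // cut the final token short.
        System.out.println("stray '&' present: " + (lastField.indexOf('&') >= 0));
    }
}
```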