Replace Words With Links

Last modified by Vincent Massol on 2021/03/18 11:28

Replaces words that match page names with links in all documents
Type: Snippet
Category:
Developed by: xwiki:XWiki.cjdelisle
Rating: 0 Votes
License: GNU Lesser General Public License 2.1

Description

0.01 written against Enterprise-1.8-milestone2

Since the XWiki character escaper has wreaked havoc on the script below, it's best to download the attachment.

To get it running, download the attachment, open it in a text editor, and copy and paste it into a page on your XWiki site. (I would do this on a local computer; if the database is big, the script will eat your CPU for lunch, and since it only generates scripts to commit the changes, there is no reason not to.) Change the array linkFrom to a list of spaces filled with pages which are wordy and need more links, and change linkTo to the spaces filled with pages which need more traffic. Save and view the page, click "run the script" (you need admin privileges), then twiddle your thumbs and watch your CPU meter. :)

The arrays linkTo and linkFrom are the lists of spaces which should be examined for possible links. They can include the same spaces, but pages in those spaces might then get links pointing to themselves. linkFrom lists the spaces containing the pages which will be scanned for words that are the names of pages contained in one of the spaces in linkTo.
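
For example, a wiki that keeps definitions in a Glossary space and articles in Main and Blog (hypothetical space names, shown only to illustrate the settings in the script below) might use:

linkTo = ["Glossary"]//pages in Glossary are the link targets
linkFrom = ["Main","Blog","Glossary"]//pages in these spaces get scanned; Glossary pages may end up linking to each other
noLinkFrom = ["WebHome"]//page names inside the linkFrom spaces that should never be touched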

Everything is judged as lowercase: if there is a page called CreAtive_Spelling, then CREATIVE SPELLING becomes [CREATIVE SPELLING>space.CreAtive_Spelling], creative spelling becomes [creative spelling>space.CreAtive_Spelling], and CreAtive_Spelling becomes [space.CreAtive_Spelling].

Underscore to space: if there is a page called Bad_Programmer, instances of "Bad Programmer" will be converted to [space.Bad_Programmer].

Biggest match selection: if there is a page called SpaghettiCode_Writer, instances of "SpaghettiCode Writer" are turned into links to SpaghettiCode_Writer even if pages exist called Spaghetti, Code, and Writer.
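
Roughly, the matching boils down to comparing a lowercased, underscore-to-space copy of each page name against a lowercased copy of the text, and keeping the longest hit where matches overlap. A standalone Groovy sketch of that idea (not part of the snippet; page names taken from the examples above):

pageNames = ["CreAtive_Spelling","Bad_Programmer","SpaghettiCode_Writer","Code"]
//normalize the same way the snippet does: underscore -> space, keep a lowercase copy for comparison
candidates = pageNames.collect{ [caps: it.replace("_"," "), lower: it.replace("_"," ").toLowerCase()] }
text = "They hired a SpaghettiCode Writer known for CREATIVE SPELLING"
matches = candidates.findAll{ text.toLowerCase().contains(it.lower) }
println(matches.collect{ it.caps })//[CreAtive Spelling, SpaghettiCode Writer, Code]
//the longest hit; in the real script "Code" would be dropped because it overlaps this one
println(matches.max{ it.lower.length() }.caps)//SpaghettiCode Writer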

Hope you enjoy it. When you copy it, you accept responsibility for what your copies do.

<%
//replaceWordsWithLinks.groovy
//by Caleb James DeLisle calebdelisle{at}lavabit.com
//looks through wiki db and finds words in pages which are the same as the names of pages
//replaces the words with links to the pages.
//generates 1 or more pages of groovy script to commit changes
//if commit page exceeds 65,535 characters, it puts the rest on a new page.
//autogenerated pages to be found in temp.replaceWordsWithLinks_(RANDOM)_(PAGENUMBER)
//also generates script to rollback all changes.
//requires LOTS of power to run, best to run on a local computer, and copy the commit pages to server.
//commit pages contain SHA-1 sums of content of all pages to be changed,
//any page which has been updated since examination by this script will fail checksum and be skipped by commit script
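//For illustration only (made-up page names and sum), the changes string on a commit page looks like:
//  " Main.SomePage 1x2y3z 120,135[Thing.Linux]88,95[spaghetti code>Thing.SpaghettiCode_Writer]"
//i.e. " <fullName> <sha1 in base 36> " followed by <from>,<to>[replacement link] entries,
//ordered from the highest character index to the lowest so that earlier offsets stay valid.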

linkTo = ["Thing","Plant"]//spaces to to look for pages to link to
linkFrom = ["Thing","Plant","Connection","Main"]//spaces to look for pages with words to change to links
noLinkFrom = ["RssFeeds"]//pages whithin the above spaces to avoid changing words to links
okayStart = [' ',')','(','>','\n','\t','*','~']//not all words start with a space
okayEnd = [' ',')','(','<',',','.',';',':','?','!','\n','\t','*','~']//or end with a space
skipBetween = ["[","]","<a","</","<"+"%","%"+">","<script","</script>","#"+"*","*"+"#"]//any matches found between each pair of characters is skipped.
skipAfter = ["#"]//anything found on the same line after thiese strings is skipped.
int shortestMatch = 3; //if you have pages called "is" and "it" and "to", this prevents it turning (almost) everything to links

import java.lang.*;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

if(request.get("go")!=null && skipBetween.size()%2==0){
   if(!xwiki.hasAdminRights()){println("You have to have admin permission to run this script. :(");return;}
   int counter = 0, counter2 = 0, counter3 = 0, theIndex = 0, nLineIndex = 0, temp = 0, temp2 = 0;
   int largestLocation = 0, location = 0, endLocation = 0, thisEndLocation = 0, changesDropped = 0, totalFound = 0, pageNum = 1, rbPageNum = 1;
    String id = UUID.randomUUID().toString().substring(0,8);
    String fullName = "", theMatch = "";
    String applyChange = "", applyChanges = "", rollbackChange = "", rollbackChanges = "";
    ArrayList<String> names = new ArrayList<String>();
    ArrayList<String> namesCaps = new ArrayList<String>();
    ArrayList<String> change;
    ArrayList<ArrayList<String>> changes = new ArrayList<ArrayList<String>>();
    HashMap<Integer,Integer> indexByLocation = new HashMap<Integer,Integer>();
    MessageDigest md = MessageDigest.getInstance("SHA-1");

    String makeChangesHeader = "<"+"% //automatically generated groovy script\nif(!xwiki.hasAdminRights()){println(\"You have to have admin permission to run this script. :(\");return;}\nString changes = \"";
    String makeChangesFooter = "\";\nimport java.lang.*;\n"+
   "import java.security.MessageDigest;\n"+
   "import java.security.NoSuchAlgorithmException;\n"+
   "import java.math.BigInteger;\n"+
   "try{\n"+
   "    MessageDigest md = MessageDigest.getInstance(\"SHA-1\");\n"+
   "    int index = 1, nextIndex = 0, fromIndex = 0, toIndex = 0, success = 0, failed = 0;\n"+
   "    String content = \"\", contentSum = \"\", docName = \"\";\n"+
   "    while(index < changes.length()){\n"+
   "        nextIndex = changes.indexOf(\" \",index);\n"+
   "        docName = changes.substring(index,nextIndex);\n"+
   "        document = xwiki.getDocument(docName);\n"+
   "        content = document.getContent();\n"+
   "        contentSum = new BigInteger(1,md.digest(content.getBytes())).toString(36);\n"+
   "        index = nextIndex + 1;nextIndex = changes.indexOf(\" \",index);\n"+
   "        println(\"\\n\\n\"+docName);\n"+
   "        if(!contentSum.equals(changes.substring(index,nextIndex))){\n"+
   "            println(\"*  Checksum failed, skipping this document.\");\n"+
   "            failed++;\n"+
   "            index = changes.indexOf(\" \",nextIndex+1)+1;\n"+
   "            continue;\n"+
   "        }\n"+
   "        success++;\n"+
   "        index = nextIndex + 1;nextIndex = changes.indexOf(\",\",index);\n"+
   "        while(index < changes.length() && !changes.charAt(index).toString().equals(\" \")){\n"+
   "            fromIndex = Integer.parseInt(changes.substring(index,nextIndex));\n"+
   "            index = nextIndex + 1;nextIndex = changes.indexOf(\"[\",index);\n"+
   "            toIndex = Integer.parseInt(changes.substring(index,nextIndex));\n"+
   "            index = nextIndex;nextIndex = changes.indexOf(\"]\",index)+1;\n"+
   "            if(request.get(\"commit\")!=null){}else{\n"+
   "                if(request.get(\"verbose\")!=null){println(\"*  \"+changes.substring(index,nextIndex)+\" @ \"+fromIndex+\" to \"+toIndex+\" \\\"\"+content.substring(Math.min(0,fromIndex-20),fromIndex).replace(\"\\n\",\"\\\\n\").replace(\"<\",\"\").replace(\"{/pre\"+\"}\",\"\")+\n"+
   "                                                           changes.substring(index,nextIndex)+content.substring(toIndex,toIndex+Math.min(20,content.substring(toIndex).length())).replace(\"\\n\",\"\\\\n\").replace(\"<\",\"\").replace(\"{/pre\"+\"}\",\"\")+\"\\\"&#34;);n"+
   "                }else{println(&#34;* &#34;+changes.substring(index,nextIndex)+&#34; @ \"+fromIndex+\" to \"+toIndex+\"  Will succeed&#34;);n"+
   "                }n"+
   "            }n"+
   "            content = content.substring(0,fromIndex)+changes.substring(index,nextIndex)+content.substring(toIndex);n"+
   "            index = nextIndex;nextIndex = changes.indexOf(&#34;,&#34;,index);n"+
   "        }n"+
   "        index++;n"+
   "        if(request.get(&#34;commit&#34;)!=null){n"+
   "            document.doc.setContent(content);n"+
   "            document.save();n"+
   "            println(&#34;*  Commit Successful&#34;);n"+
   "        }n"+
   "    }n"+
   "    if(request.get(&#34;commit&#34;)!=null){n"+
   "        println(&#34;<br/>n<br/>nSuccess. Updated &#34;+success+&#34; pages&#34;);n"+
   "        if(failed > 0){println(&#34;<br/>n<br/>nHashes failed for &#34;+failed+&#34; pages, those pages skipped.&#34;);}n"+
   "        document = context.context.getDoc();n"+
   "        docName = document.getFullName();n"+
   "        index = Integer.parseInt(docName.substring(docName.lastIndexOf(&#34;_&#34;)+1))+1;n"+
   "        docName = docName.substring(0,docName.lastIndexOf(&#34;_&#34;))+&#34;_&#34;+index;n"+
   "        if(xwiki.exists(docName)){n"+
   "            println(&#34;<br/>n<br/>nTo execute the next part, click <a class="wikicreatelink" href="/xwiki/bin/edit/Snippets/here%26%2362%3B%26%2338%3B%2334%3B+docName+%26%2338%3B?parent=Snippets.ReplaceWordsWithLinksSnippet"><span class="wikicreatelinktext">here>&#34;+doc Name+&</span><span class="wikicreatelinkqm">?</span></a>&#34;);n"+
    "
       }else{println(&#34;<br/>n<br/>nNo more parts to execute, done :)&#34;);}n"+
    "
   }else{n"+
    "
       println(&#34;<br/>n<br/>nWill update &#34;+success+&#34; pages&#34;);n"+
    "
       if(failed > 0){println(&#34;<br/>n<br/>nHashes failed for &#34;+failed+&#34; pages, those pages will be skipped.&#34;);}n"+
    "
       println(&#34;<br/>n<br/>nTo commit changes click <a class="wikicreatelink" href="/xwiki/bin/edit/commit%26%2362%3B%26%2338%3B%2334%3B+context.doc/fullName+%26%2338%3B?parent=Snippets.ReplaceWordsWithLinksSnippet"><span class="wikicreatelinktext">doc.full Name+&</span><span class="wikicreatelinkqm">?</span></a>&#34;);n"+
    "        if(request.get(&#34;verbose&#34;)!=null){}else{println(&#34;<br/
>n<br/>nTo see the changes with more verbosity, click <a class="wikicreatelink" href="/xwiki/bin/edit/verbose%26%2362%3B%26%2338%3B%2334%3B+context.doc/fullName+%26%2338%3B?parent=Snippets.ReplaceWordsWithLinksSnippet"><span class="wikicreatelinktext">doc.full Name+&</span><span class="wikicreatelinkqm">?</span></a>&#34;);}n"+
   "    }n"+
   "}catch(NoSuchAlgorithmException e){n"+
   "    System.out.println(&#34;ERROR: Algorithm not imported.&#34;);n"+
   "}n%"+">";<p/>
    String rollbackChangesFooter = "\";\nimport java.lang.*;\n"+
    "import com.xpn.xwiki.doc.XWikiDocument;\n"+
    "int counter = 0, index = 0, nextIndex = 0, success = 0, failed = 0, errors = 0;\n"+
    "String docName = \"\", strPrevVer = \"\";\n"+
    "float prevVersion = 0, version = 0;\n"+
    "XWikiDocument document, newdoc;\n"+
    "wiki = context.context.getWiki();\n"+
    "while(index < changes.length()){\n"+
    "    nextIndex = changes.indexOf(\" \",index);\n"+
    "    docName = changes.substring(index,nextIndex);\n"+
    "    document = wiki.getDocument(docName,context.context);\n"+
    "    document.loadArchive(context.context);//makes getPreviousVersion() work w/o null pointer error.\n"+
    "    index = nextIndex + 1;nextIndex = changes.indexOf(\" \",index);\n"+
    "    if(document.getVersion().equals(\"1.1\") || document.getVersion().equals(\"1.0\")){strPrevVer = \"0.0\";}else{strPrevVer = document.getPreviousVersion();}\n"+
    "    if(strPrevVer.equals(changes.substring(index,nextIndex))){\n"+
    "        success++;\n"+
    "        if(request.get(\"commit\")!=null){\n"+
    "            newdoc = wiki.rollback(document, strPrevVer, context.context);\n"+
    "            println(\"* Document \"+docName+\" successfully rolled back.\");\n"+
    "        }else{\n"+
    "            println(\"* Document \"+docName+\" rollback will succeed.\");\n"+
    "        }\n"+
    "    }else{\n"+
    "        prevVersion = Float.parseFloat(strPrevVer);\n"+
    "        version = Float.parseFloat(changes.substring(index,nextIndex));\n"+
    "        if(prevVersion > version){\n"+
    "            failed++;\n"+
    "            if(request.get(\"force\")!=null){\n"+
    "                if(request.get(\"commit\")!=null){\n"+
    "                    newdoc = wiki.rollback(document, strPrevVer, context.context);\n"+
    "                    println(\"* Document \"+docName+\" *rolled back by force, squashing changes.*\");\n"+
    "                }\n"+
    "            }else{\n"+
    "                if(request.get(\"commit\")!=null){\n"+
    "                    println(\"* Document \"+docName+\" *has been updated since changes were committed, skipping.*\");\n"+
    "                }else{\n"+
    "                    println(\"* Document \"+docName+\" *has been updated since changes were committed, it will be skipped unless you click force.*\");\n"+
    "                }\n"+
    "            }\n"+
    "        }else if(prevVersion < version){\n"+
    "            println(\"* ERROR: It appears that the changes have not been applied yet. Document: \"+docName);errors++;\n"+
    "        }else{\n"+
    "            println(\"Unknown error.\");\n"+
    "            return;\n"+
    "        }\n"+
    "    }\n"+
    "    index = nextIndex + 1;nextIndex = changes.indexOf(\" \",index);\n"+
    "}\n"+
    "if(request.get(\"commit\")!=null){\n"+
    "    println(\"<br/>\\n<br/>\\nRollback committed. \"+success+\" pages rolled back successfully\");\n"+
    "    if(failed > 0){\n"+
    "        if(request.get(\"force\")!=null){\n"+
    "           println(failed+\" pages forced back\");\n"+
    "        }else{println(failed+\" pages skipped because they have been modified since. \");}\n"+
    "    }\n"+
    "    if(errors > 0){println(errors+\" pages skipped because they are not new enough to rollback\");}\n"+
    "    doc = context.context.getDoc();\n"+
    "    docName = doc.getFullName();\n"+
    "    index = Integer.parseInt(docName.substring(docName.lastIndexOf(\"_\")+1))+1;\n"+
    "    docName = docName.substring(0,docName.lastIndexOf(\"_\"))+\"_\"+index;\n"+
    "    if(xwiki.exists(docName)){\n"+
    "        println(\"<br/>\\n<br/>\\nTo execute the next part, click [here>\"+docName+\"]\");\n"+
    "    }else{println(\"<br/>\\n<br/>\\nNo more parts to execute, done :)\");}\n"+
    "}else{\n"+
    "    if(success > 0){println(\"<br/>\\n<br/>\\n\"+success+\" pages will be rolled back successfully.\")}\n"+
    "    if(errors > 0){println(\"<br/>\\n<br/>\\n\"+errors+\" pages are not new enough to rollback, it looks like the changes have not been applied yet. DO NOT COMMIT unless you know what you are doing\");}\n"+
    "    println(\"<br/>\\n<br/>\\nTo commit rollback, click [commit>\"+doc.fullName+\"?commit].<br/>\\n<br/>\\n\");\n"+
    "    if(failed > 0){\n"+
    "        println(failed+\" pages will be skipped because they have been modified since the commit operation. \");\n"+
    "        println(\"<br/>\\n<br/>\\nTo commit rollback with force, click [force>\"+doc.fullName+\"?commit&force].\");\n"+
    "    }\n"+
    "}";
>
    String readQuery = "select doc.fullName from XWikiDocument doc where (";
    String nameQuery = readQuery;
   while(counter < linkFrom.size()){
        readQuery += "doc.space = '"+linkFrom.get(counter)+"'";
        counter++;
       if(counter < linkFrom.size()){
            readQuery += " or ";
       }else{
            readQuery += ")";
       }
   }
    counter = 0;
   while(counter < noLinkFrom.size()){
        readQuery += " and doc.name != '"+noLinkFrom.get(counter)+"'";
        counter++;
   }
    counter = 0;
   while(counter < linkTo.size()){
        nameQuery += "doc.space = '"+linkTo.get(counter)+"'";
        counter++;
       if(counter < linkTo.size()){
            nameQuery += " or ";
       }else{
            nameQuery += ")";
       }
    }

    ArrayList<String> linkToFullNames = context.context.getWiki().getHibernateStore().search(nameQuery, 0, 0, context.context);
    ArrayList<String> readFullNames = context.context.getWiki().getHibernateStore().search(readQuery, 0, 0, context.context);

    counter = 0;
   while(counter < linkToFullNames.size()){
        name = linkToFullNames.get(counter).substring(linkToFullNames.get(counter).indexOf('.')+1).replace("_"," ");
       if(name.length() < shortestMatch){
            linkToFullNames.remove(counter);
           continue;
       }
        namesCaps.add(name);
        names.add(name.toLowerCase());
        counter++;
    }

    counter = 0;
    while(counter < readFullNames.size()){
        fullName = readFullNames.get(counter);
        xdoc = xwiki.getDocument(fullName);
        content = xdoc.getContent();
        contentLower = content.toLowerCase();
        counter2 = 0;
        while(counter2 < names.size()){
            theIndex = contentLower.lastIndexOf(names.get(counter2));

           thisPageThisName:
           while(theIndex!=-1){

                endIndex = theIndex+names.get(counter2).length();
                theMatch = content.substring(theIndex,endIndex);
                if(theIndex != 0 && !okayStart.contains(content.charAt(theIndex-1).toString())){
                    theIndex = contentLower.substring(0,theIndex).lastIndexOf(names.get(counter2));
                    //println("1");
                    continue;
                }
                if(endIndex < content.length() && !okayEnd.contains(content.charAt(endIndex).toString())){
                    theIndex = contentLower.substring(0,theIndex).lastIndexOf(names.get(counter2));
                    //println("2");
                    continue;
                }

                beforeMatch = content.substring(0,theIndex);

                counter3 = 0;
                while((counter3+1) < skipBetween.size()){
                    temp = beforeMatch.lastIndexOf(skipBetween.get(counter3));
                    if(temp!=-1){
                        temp2 = beforeMatch.lastIndexOf(skipBetween.get(counter3+1));
                        if(temp2==-1 || temp2 < temp){
                            theIndex = contentLower.substring(0,temp).lastIndexOf(names.get(counter2));
                            continue thisPageThisName;
                        }
                    }
                    counter3 += 2;
                }
                 nLineIndex = beforeMatch.lastIndexOf("\n");
                if(nLineIndex !=-1){
                    beforeMatch = beforeMatch.substring(nLineIndex+1);
                }
                counter3 = 0;
                while(counter3 < skipAfter.size()){
                    if(beforeMatch.lastIndexOf(skipAfter.get(counter3))!=-1){
                        theIndex = contentLower.substring(0,theIndex).lastIndexOf(names.get(counter2));
                        continue thisPageThisName;
                    }
                    counter3++;
                }

                change = new ArrayList<String>();
                change.add(fullName);
                change.add(new BigInteger(1,md.digest(content.getBytes())).toString(36));
                change.add(theIndex.toString());
                change.add(endIndex.toString());
                if(theIndex==content.indexOf(namesCaps.get(counter2))){
                    /*println("* ["+fullName+"] contains: \""+
                             beforeMatch.substring(Math.max(0,beforeMatch.length()-20))+"*"+theMatch+"*"+
                             content.substring(endIndex,Math.min(content.length(),endIndex+20))+"\" at indexes: "+
                             theIndex+" to "+endIndex+" substituting for: "+
                             "["+linkToFullNames.get(counter2)+"] (["+linkToFullNames.get(counter2)+"])");
                    */
                    change.add("["+linkToFullNames.get(counter2)+"]");

                }else{
                    /*println("* ["+fullName+"] contains: \""+
                             beforeMatch.substring(Math.max(0,beforeMatch.length()-20))+"*"+theMatch+"*"+
                             content.substring(endIndex,Math.min(content.length(),endIndex+20))+"\" at indexes: "+
                             theIndex+" to "+endIndex+" substituting for: "+
                             "["+theMatch+">"+linkToFullNames.get(counter2)+"] (["+theMatch+">"+linkToFullNames.get(counter2)+"])");
                    */
                    change.add("["+theMatch+">"+linkToFullNames.get(counter2)+"]");
                }
                change.add(xdoc.getVersion());
                changes.add(change);
                theIndex = contentLower.substring(0,theIndex).lastIndexOf(names.get(counter2));
            }
            counter2++;
        }

       if(changes.size() > 0){
            counter2 = 0;
           while(counter2 < changes.size()){
                location = Integer.parseInt(changes.get(counter2).get(2));
                indexByLocation.put(location,counter2);
               if(location > largestLocation){
                    largestLocation = location;
               }
                counter2++;
           }
            counter2 = 0;
           while(counter2 <= largestLocation){//if two words interfere, the shorter word is removed.
               if(indexByLocation.containsKey(counter2)){
                    theIndex = indexByLocation.get(counter2);
                    thisEndLocation = Integer.parseInt(changes.get(theIndex).get(3));
                   if(endLocation > counter2){//if the end of the last word is past the beginning of the current word
                       changesDropped++;//something is going to be removed.
                       if((endLocation-location) > (thisEndLocation-counter2)){//if the last word was longer
                           indexByLocation.remove(counter2);//remove this word
                       }else{
                           //first check to make sure there isn't another (even longer) word which also interferes with "this" word
                           counter3 = counter2+1;
                           while(counter3 < thisEndLocation){
                               if(indexByLocation.containsKey(counter3)){
                                   if((thisEndLocation-counter2)<(Integer.parseInt(changes.get(indexByLocation.get(counter3)).get(3))-counter3)){
                                        indexByLocation.remove(counter2);//remove this word
                                       break;
                                   }else{//remove the word which is ahead of this word and is shorter
                                       indexByLocation.remove(counter3);
                                        changesDropped++;
                                   }
                               }
                                counter3++;
                           }
                           if(counter3 == thisEndLocation){
                                indexByLocation.remove(location);//remove the last word
                           }
                       }
                   }
                    location = counter2;
                    endLocation = thisEndLocation;
               }
                counter2++;
           }
            location = 0;
            endLocation = 0;
            change = changes.get(0);
            applyChange = " "+change.get(0)+" "+change.get(1)+" ";
            rollbackChange = change.get(0)+" "+change.get(5)+" ";

            counter2 = largestLocation;//to preserve the value of index numbers, changes must be made backwards
            while(counter2 > -1){
                if(indexByLocation.containsKey(counter2)){
                    change = changes.get(indexByLocation.get(counter2));
                    applyChange += change.get(2)+","+change.get(3)+change.get(4);
                }
                counter2--;
            }
            totalFound += changes.size();
            changes.clear();
            indexByLocation.clear();

           if((applyChanges.length()+applyChange.length()) > 65535){
                newPage = xwiki.getDocument("temp.replaceWordsWithLinks_"+id+"_"+pageNum);
                newPage.doc.setContent(makeChangesHeader+applyChanges+makeChangesFooter);
                newPage.save();
                pageNum++;
                applyChanges = applyChange;
           }else{applyChanges += applyChange;}

            if((rollbackChanges.length()+rollbackChange.length()) > 65535){//just in case ;)
                newPage = xwiki.getDocument("temp.replaceWordsWithLinks_"+id+"_rollbackChanges_"+rbPageNum);
                newPage.doc.setContent(makeChangesHeader+rollbackChanges+rollbackChangesFooter);
                newPage.save();
                rbPageNum++;
                rollbackChanges = rollbackChange;
            }else{rollbackChanges += rollbackChange;}

       }
        counter++;
    }

    if(applyChanges.length() > 0){
        newPage = xwiki.getDocument("temp.replaceWordsWithLinks_"+id+"_"+pageNum);
        newPage.doc.setContent(makeChangesHeader+applyChanges+makeChangesFooter);
        newPage.save();
        pageNum++;
    }
    if(rollbackChanges.length() > 0){
        newPage = xwiki.getDocument("temp.replaceWordsWithLinks_"+id+"_rollbackChanges_"+rbPageNum);
        newPage.doc.setContent(makeChangesHeader+rollbackChanges+rollbackChangesFooter);
        newPage.save();
        rbPageNum++;
    }

    println("Success!nnRead "+readFullNames.size()+" documents looking for words which match any of "+linkToFullNames.size()+" document names.n");
    println("Found a total of "+totalFound+" possible changes and removed "+changesDropped+" because their locations intersected.nn");
    println("Auto-generated "+(pageNum-1)+" scripts to commit thiese changes, and "+(rbPageNum-1)+" scripts to remove them.n");
    counter = 1;
   while(counter < pageNum){
        println("* <a class="wikicreatelink" href="/xwiki/bin/edit/temp/replaceWordsWithLinks_%26%2334%3B+id+%26%2334%3B_%26%2334%3B+counter+%26?parent=Snippets.ReplaceWordsWithLinksSnippet"><span class="wikicreatelinktext">replace Words With Links_"+id+"_"+counter+&</span><span class="wikicreatelinkqm">?</span></a>");
        counter++;
    }
    counter = 1;
    while(counter < rbPageNum){
        println("
* <a class="wikicreatelink" href="/xwiki/bin/edit/temp/replaceWordsWithLinks_%26%2334%3B+id+%26%2334%3B_rollbackChanges_%26%2334%3B+counter+%26?parent=Snippets.ReplaceWordsWithLinksSnippet"><span class="wikicreatelinktext">replace Words With Links_"+id+"_rollback Changes_"+counter+&</span><span class="wikicreatelinkqm">?</span></a>");
        counter++;
    }

}else{
    if(request.get("go")!=null && skipBetween.size()%2==0){
        println("ERROR: array skipBetween must have even number of entries because they are handled in pairs.");
    }
    println("replaceAllWordsWithLinks: 0.01nn"+
    "by Caleb James DeLislenn"+
    "Looks through wiki db and finds words in pages which are the same as the names of pages and "+
    "Generates one or more scripts to replace the words with links to the pages. Also generates scripts to rollback all changes.nn"+
    "Depending on db size, can require LOTS of cpu to run, I would run it on a local copy of the db and copy the commit scripts to the live server.nn"+
    "Auto-generated commit script contains SHA-1 sums of content of all pages to be changed, "+
    "any page which has been updated since examination by this script will fail checksum and be skipped by commit script.nn");
    if(!xwiki.hasAdminRights()){
        println("You have to have admin permission to run this script. :(");
    }else{
        println(" <a class="wikicreatelink" href="/
xwiki/bin/edit/Click+here+to+run+script.%26%2362%3B%26%2334%3B+context.doc/fullName+%26?parent=Snippets.ReplaceWordsWithLinksSnippet"><span class="wikicreatelinktext">>"+context.doc.full Name+&</span><span class="wikicreatelinkqm">?</span></a>");
   }
}
%>
