#!/bin/bash

old_url="insights.tuhh.de"        # domain of the live site to mirror
new_url="http://localhost:8000"   # base URL the archive will be served from
download_folder=archive

# Remove existing archive
if [ -d "$download_folder" ]; then
  rm -rf "$download_folder"
fi

# Download the site only if no mirror folder exists yet
if [ ! -d "$old_url" ]; then
  wget --mirror \
    --page-requisites \
    --convert-file-only \
    --adjust-extension --compression=auto --reject-regex "/search|/rss" \
    --no-if-modified-since \
    --no-check-certificate \
    --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/68.0.3440.106 Safari/537.36" \
    "https://$old_url"
fi

# Make a working copy if the download exists
if [ -d "$old_url" ]; then
  cp -R "$old_url" "$download_folder"
fi

# Swap absolute URLs in all downloaded files (GNU sed syntax: -i without a suffix)
find "./$download_folder" -type f -exec sed -i "s|https://${old_url}|${new_url}|g" {} +

# Rewrite references like index.html%3Fp=123234.html (%3F is the URL-encoded "?") to plain index.html
find "./$download_folder" -type f -exec sed -i "s|index\.html%3Fp=[0-9]*\.html|index.html|g" {} +

# Copy the root index to the language folder
cp "./$download_folder/index.html" "./$download_folder/de/"
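Once the script has finished, the rewritten archive expects to be served at the address in new_url. A minimal preview sketch, assuming Python 3 is available and the port matches new_url (adjust both if you change it):

# Serve the converted archive on http://localhost:8000 (preview step, not part of the script above)
cd archive && python3 -m http.server 8000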
