[SOLVED] Javascript challenge: a Facebook link cleaner?
The page I have in mind:
- is static, in the sense that all of its source and code is contained in a single file
- is compatible with as many old browsers as possible, so newer Javascript details or functions will probably not be used in it (at most HTML 4.01 and CSS 2 era things)
- has as its main function cleaning the URLs that Facebook hides inside its own domain to track users who click on them (which is something I do not like at all to do, or to have my friends doing; but cleaning long URLs with special characters by hand is painful)
An example URL (with some information manually edited, and broken into a few lines at a reasonable width) is:
I imagine that all the script needs to do is decode and print the u parameter in that URL.
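For illustration, a minimal sketch of that single step, using only old, widely supported features (String.match() and decodeURIComponent()); the l.facebook.com URL and the h parameter below are made-up placeholders, not the thread's real example:

```javascript
// Extract and decode the "u" parameter from a Facebook redirect URL.
function extractU(url) {
    // [?&]u= anchors on the parameter name; [^&]* stops at the next "&"
    // or at the end of the string, so no trailing "&" is required.
    var m = url.match(/[?&]u=([^&]*)/);
    return m ? decodeURIComponent(m[1]) : "";
}

// Hypothetical example (not the thread's real URL):
extractU("https://l.facebook.com/l.php?u=https%3A%2F%2Fexample.com%2Fpage&h=AT0xyz");
// → "https://example.com/page"
```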
The basic HTML source code that I would use to start this is:
Code:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
<!-- This document was successfully checked as HTML 4.01 Strict! -->
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
<link rel="stylesheet" media="screen" href="e.css">
<title>...</title>
</head>
<body>
<h1>URL cleaner:</h1>
<table>
<tr>
<td>
<form action="here.php" method="post">
<div><textarea cols=80 rows=20 name="t"></textarea></div>
<div><input type=submit name="clean" value="clean"></div>
</form>
</td>
</tr>
</table>
</body>
</html>
No Javascript is there because I do not really know how to write it from scratch - I just "tune" little bits of code that I eventually fiddle with.
Is there anyone around here who would like to make this? It is probably quick and easy for anyone who has programmed in Javascript for a long time - which is why I am too lazy to try it myself.
Any takers? Please know that I will comment and tell everyone around here about any "defects" it may have (for me and my planned, narrowed-down and limited uses).
That page would load only a single time, and could even be cached locally. Since the script is contained in it, it would need no further network access. It would probably create a <p> (or similar) holding the clean URL to be copied or used (it could even be an <a>). A Greasemonkey script applied to all FB pages is beyond what I need, and would waste CPU cycles most of the time.
// Input : raw URL
// Output: "cleaned" (decoded) URL, or "" if no u parameter is found
function parseUrl(url) {
    // [?&]u= matches the parameter name exactly; [^&]* stops at the
    // next "&" or at the end of the string, so it also works when
    // u is the last parameter.
    var a = url.match(/[?&]u=([^&]*)/);
    return (a && a[1]) ? decodeURIComponent(a[1]) : "";
}

// Input : fromInput, toInput (form text inputs / textareas)
// Output: none; writes the cleaned URL into toInput
function cleaner(fromInput, toInput) {
    toInput.value = parseUrl(fromInput.value);
}
A note about the above code: it is very easy to clean and copy a URL with it: ctrl+v, tab, ctrl+a, ctrl+c. And in my favorite browser there is ctrl+shift+v to paste and go!
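For completeness, here is one way cleaner() could be hooked to two text areas so that the paste-and-TAB workflow above works without pressing any button. This is only a sketch: the element ids and the onkeyup/onchange triggers are my assumptions, not the thread's actual markup.

```html
<!-- Paste into "t"; the cleaned URL appears in "out". -->
<form action="">
<div><textarea cols=80 rows=10 id="t" name="t"
    onkeyup="cleaner(document.getElementById('t'), document.getElementById('out'))"
    onchange="cleaner(document.getElementById('t'), document.getElementById('out'))"></textarea></div>
<div><textarea cols=80 rows=10 id="out" name="out"></textarea></div>
</form>
```

With this wiring the second textarea is already filled by the time TAB moves the focus into it, so ctrl+a, ctrl+c works immediately.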
Thread's summary (useful for reusing the thread's main results and information; I took some time to write it now)
Quote:
Originally Posted by dedec0
It worked as I posted there - but I did not test too much today. Thank you! :)
"Posted there": at post #7. We may simply copy all the code in this post and save it wherever we want: locally or in a page we want to have it. There are *no* other files it will access: no script, no style, no image. A simple and directed HTML 4.01 file, and it was also checked in W3C's validator, as indicated in its first lines. This file is saved (and given as) using UTF-8 encoding (and Vim editor will keep doing that due its modeline in the end) - so adding/changing any text in/for any language should be very easy, although I prefer it with almost no text (and in my saved file, its two strings are in my language, not in English, as given in #7).
Quote:
Originally Posted by dedec0
A note about the above code: it is very easy to clean and copy an URL from the above code: ctrl+v, tab, ctrl+a, ctrl+c. And in my favorite browser there is the ctrl+shift+v to paste and go! :)
The button "Clean URL" may not be necessary to use: My use of that util is simple: 1) I pasted the bad URL (copied from FB) in the first textarea, 2) press TAB to change the focus the the second textarea, 3) the second textarea is changed by the page's script to the hidden URL argument in the first, then I already CTRL+A CTRL+C and use it. My uses are either: doing a "ctrl+l ctrl+v enter"; or a "ctrl+l ctrl+shift+v"; or anything we aimed for the cleaned URL.
In #9, keefaz pointed out the 3 important things the page depends on, which should be the crucial parts for compatibility with whatever browser the page is used in: the JS functions decodeURIComponent(), getElementById() and String.match(). These are present in browser versions I have from several years ago (and still use), so compatibility should not require any adjustments to the page.
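Those three calls are indeed old and stable; decodeURIComponent(), for instance, has been part of JavaScript since ECMAScript 3 (1999). A quick sanity check of the decoding step, on a made-up encoded URL:

```javascript
// decodeURIComponent() turns percent-escapes back into characters;
// this is the step that recovers the hidden URL from the "u" parameter.
var encoded = "https%3A%2F%2Fexample.com%2Fpage%3Fa%3D1";
var decoded = decodeURIComponent(encoded);
// decoded is now "https://example.com/page?a=1"
```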