```javascript
// jQuery-lite helpers for the browser console:
const q  = (sel, root = document) => root.querySelector(sel)
const qq = (sel, root = document) => Array.from(root.querySelectorAll(sel))

// Take a chunk of a webpage and make it the whole page.
const maxi = (sel, root = document) => {
  const el = q(sel, root)
  if (el) { document.body.innerHTML = el.innerHTML }
}

// Remove every element matching the selector.
const kill = (sel, root = document) => qq(sel, root).forEach(el => el.remove())
```

Scraping example: copy the `href`s of all PDF links inside a `div` element with class `download`:

```javascript
copy(Array.from(document.querySelectorAll("div.download a"))
  .map(a => a.getAttribute("href"))
  .filter(href => href && href.match(/\.pdf/i))
  .join("\n"))
```

## Assembling contents for headers

The basic idea is this:

```javascript
const counters = [0, 0, 0, 0, 0, 0]
qq("h1, h2, h3, h4, h5, h6").forEach(h => {
  const lvl = h.tagName.slice(1) | 0               // "H2" -> 2
  for (let i = lvl; i < 6; i++) counters[i] = 0    // reset deeper levels
  counters[lvl - 1]++
  console.log(counters.slice(0, lvl).join(".") + " " + h.textContent)
})
```

Then we can:

* Detect the highest level of heading present (e.g. if there are only h2s and below), so we don't number sections 0.1, 0.2, etc.
* Build a list of `(element, headingLevel, counters)` tuples. From `counters` we extract, say, the first three numbers for an h3, assuming there is an h1; but if there are only h2s and below, we take the two numbers after the first.
* Assemble a contents page. If we store `element` as a property of its entry in the contents, we can use `scrollIntoView` rather than an `<a href="#name">` construct.

We will use an option `contents=yes` to signify that the page is to build and display a contents page, and [CSS]() with an added class on the main content div to make things wider when a contents page is shown.
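The first two bullets can be sketched as a pure function; `numberHeadings` is a name of my own invention, and this is one way to realize the trimming described above, not necessarily the final implementation. Given the heading levels in document order (1 = h1 ... 6 = h6), it produces each heading's section number, trimmed so numbering starts at the highest level actually present on the page:

```javascript
// Sketch: compute section numbers for a list of heading levels (1 = h1 ... 6 = h6).
// Numbering starts at the highest level present, so a page with only h2s and
// below is numbered 1, 1.1, 2 ... rather than 0.1, 0.2.
function numberHeadings(levels) {
  const top = Math.min(...levels)       // highest heading level on the page
  const counters = [0, 0, 0, 0, 0, 0]
  return levels.map(lvl => {
    counters[lvl - 1]++
    for (let i = lvl; i < 6; i++) counters[i] = 0   // reset deeper counters
    return counters.slice(top - 1, lvl).join(".")
  })
}

console.log(numberHeadings([1, 2, 2, 1]))   // ["1", "1.1", "1.2", "2"]
console.log(numberHeadings([2, 3, 3, 2]))   // ["1", "1.1", "1.2", "2"]
```

In the browser this pairs naturally with `qq`: collect `const heads = qq("h1, h2, h3, h4, h5, h6")`, feed `heads.map(h => h.tagName.slice(1) | 0)` into `numberHeadings`, and zip the results into `{el, num}` contents entries whose click handlers call `entry.el.scrollIntoView()`.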