Compare commits

..

11 Commits

Author SHA1 Message Date
mb 5b52666d9f adding some markdown markup 2019-09-11 20:12:43 +02:00
mb 36da59aaf0 adding some markdown markup 2019-09-11 20:11:24 +02:00
mb b824daa420 Update 'README.md' 2019-09-11 19:56:06 +02:00
mb 3e492eefdb Update 'README.md' 2019-09-11 19:55:49 +02:00
mb 0e92ac740d Update 'README.md' 2019-09-11 19:53:34 +02:00
mb b628c6bb05 small change in the readme text 2019-09-11 19:49:43 +02:00
mb 1d2ded309f adding a , 2019-09-11 19:47:19 +02:00
d0b8d337d5 removing the pump img 2019-09-11 19:46:05 +02:00
cf03fafd0a renaming all files to etherpump + adding a etherpump readme 2019-09-11 19:44:37 +02:00
95a021d405 adding python-dateutil to the requirements inside setup.py to enable pip install -e . when installing etherdump 2019-07-04 18:07:44 +01:00
f9bb4444e2
Add __PUBLISH__ logic
Closes https://gitlab.constantvzw.org/aa/etherdump/issues/3.

This allows for the following to be run:

    etherdump pull --publish-opt-in --all --pub mydump

And if `__PUBLISH__` is not present on a pad, then that pad will not
be archived. It is also possible to configure this magic word by
specifying the `--publish ...` option.
2019-03-06 10:17:18 +01:00
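The opt-in rule described in this commit message comes down to a membership test on each pad's text. A minimal sketch of that idea, with a hypothetical `pads` mapping standing in for the real pull machinery:

```python
# Sketch of the __PUBLISH__ opt-in described above.
# `pads` is a hypothetical {padid: text} mapping; the real check happens
# inside `etherpump pull --publish-opt-in`.
def publishable(pads, magic_word="__PUBLISH__"):
    """Keep only pads whose text explicitly opts in to being archived."""
    return {padid: text for padid, text in pads.items() if magic_word in text}


pads = {"notes": "minutes...\n__PUBLISH__", "scratch": "private jottings"}
assert list(publishable(pads)) == ["notes"]
```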
39 changed files with 1004 additions and 3051 deletions

.gitignore (vendored): 16 lines changed

@@ -1,13 +1,7 @@
-*.log
-*.pyc
-*egg-info*
-*~
-.etherpump
-/p/
-/publish/
 build/
-dist/
-index.html
-padinfo.json
-testing/
+*.pyc
+*~
 venv/
+testing/
+padinfo.json
+.etherpump

Makefile (deleted): 14 lines changed

@@ -1,14 +0,0 @@
-default: style
-
-format:
-	@poetry run black etherpump
-
-sort:
-	@poetry run isort etherpump
-
-lint:
-	@poetry run flake8 etherpump
-
-style: format sort lint
-
-.PHONY: style format sort lint

README.md: 297 lines changed

@@ -1,291 +1,150 @@
-# etherpump
+etherpump
+=========
 
-[![PyPI version](https://badge.fury.io/py/etherpump.svg)](https://badge.fury.io/py/etherpump)
-[![GPL license](https://img.shields.io/badge/license-GPL-brightgreen.svg)](https://git.vvvvvvaria.org/varia/etherpump/src/branch/master/LICENSE.txt)
-
-_Pumping text from etherpads into publications_
+*pumping text from etherpads into publications*
 
 A command-line utility that extends the multi writing and publishing functionalities of the [etherpad](http://etherpad.org/) by exporting the pads in multiple formats.
 
-## Many pads, many networks
+Many pads, many networks
+------------------------
 
-_Etherpump_ is a friendly fork of [_etherdump_](https://gitlab.constantvzw.org/aa/etherdump), a command line tool written by [Michael Murtaugh](http://automatist.org/) that converts etherpad pages to files. This fork is made out of curiosities in the tool, a wish to study it and shared sparks of enthusiasm to use it in different situations within Varia.
+*Etherpump* is a fork of [*etherdump*](https://gitlab.constantvzw.org/aa/etherdump), a command line tool written by [Michael Murtaugh](http://automatist.org/) that converts etherpad pages to files. This fork is made out of curiosities in the tool, a wish to study it and shared sparks of enthusiasm to use it in different situations within Varia.
 
-Etherpump is a stretched version of etherdump. It is a playground in which we would like to add features to the initial tool that diffuse actions of _dumping_ into _pumping_. So most of all, etherpump is a work-in-progress, exploring potential uses of etherpads to edit, structure and publish various types of content.
+Etherpump is a stretched version of etherdump. It is a playground in which we would like to add features to the initial tool that diffuse actions of *dumping* into *pumping*. So most of all, etherpump is a work-in-progress, exploring potential uses of etherpads to edit, structure and publish various types of content.
 
 Added features are:
 
-- opt-in publishing with the `__PUBLISH__` magic word
-- the `publication` command, that listens to custom magic words such as `__RELEARN__`
+* opt-in publishing with the `__PUBLISH__` magic word
+* the `publication` command, that listens to custom magic words such as `__RELEARN__`
 
-See the [Change log / notes ](#change-log--notes) section for further changes.
-
-Etherpump is a tool that is used from the command line. It pumps all pads of one etherpad installation to a folder, saving them as different text files, such as plain text and HTML. It also creates an index file, that allows one to easily navigate through the list of pads. Etherpump follows a document-driven idea of publishing, which means that it converts pads as database entries into pads as files. This seems to be a redundant act of copying, but is actually an important in-between step that allows for many different publishing projects and experiments.
-
-We started to get to know etherpump through various editions of Relearn and/or the worksessions organized by Constant. Collaborative writing on an etherpad has been an important ingredient for these situations. The habit of using pads branched into the day-to-day practice of Varia, where we use etherpads for all sorts of things, ranging from organising remote-meetings with 10+ people, to writing and designing PDF documents collaboratively.
-
-After installing etherpump on the Varia server, we collectively decided to not want to publish pads by default. Discussions in the group around the use of etherpads, privacy and ideas of what publishing means, led to a need to have etherpump only start the indexing work after it recognizes a `__PUBLISH__` marker on a pad. We decided to work on a `__PUBLISH__ vs. __NOPUBLISH__` branch of etherdump, which we now fork into **etherpump**.
+Etherdump is a tool that is used from the command line. It dumps all pads of one etherpad installation to a folder, saving them as different text files, such as plain text and HTML. It also creates an index file, that allows one to easily navigate through the list of pads. Etherdump follows a document-driven idea of publishing, which means that it converts pads as database entries into pads as files. This seems to be a redundant act of copying, but is actually an important in-between step that allows for many different publishing projects and experiments.
+
+We started to get to know etherdump through various editions of Relearn and/or the worksessions organized by Constant. Collaborative writing on an etherpad has been an important ingredient for these situations. The habit of using pads branched into the day-to-day practice of Varia, where we use etherpads for all sorts of things, ranging from organising remote-meetings with 10+ people, to writing and designing PDF documents collaboratively.
+
+After installing etherdump on the Varia server, we collectively decided to not want to publish pads by default. Discussions in the group around the use of etherpads, privacy and ideas of what publishing means, led to a need to have etherdump only start the indexing work after it recognizes a `__PUBLISH__` marker on a pad. We decided to work on a `__PUBLISH__ vs. __NOPUBLISH__` branch of etherdump, which we now fork into **etherpump**.
 
-# Change log / notes
+Change log / notes
+==================
 
-**December 2020**
-
-Added the `--magicwords` flag. Parsing and indexing of magic words is now
-supported. See [etherpump.vvvvvvaria.org](https://etherpump.vvvvvvaria.org) for
-more. This is still a work in progress.
-
-Change `--connection` default setting to 50 to avoid overpowering modestly
-powered servers.
-
-**November 2020**
-
-Releasing Etherpump 0.0.18!
-
-Handled a bug that saved the same HTML content in multiple files. Disclaimer: resolved in a hacky way.
-
----
-
-**October 2020**
-
-Use the more friendly packaging tool [Poetry](https://python-poetry.org/) for publishing.
-
-Further performance tweaks, informative logging and miscellaneous bug fixing.
-
-Decolonize our Git praxis and use the `main` branch.
-
----
-
-**January 2020**
-
-Added experimental [trio](trio.readthedocs.io) and
-[asks](https://asks.readthedocs.io/en/latest/index.html) support for the `pull`
-command which enables pads to be processed concurrently. The default
-`--connection` option is set to 20 which may overpower the target server. If in
-doubt, set this to a lower number (like 5). This functionality is experimental,
-be cautious and please report bugs!
-
-Removed fancy progress bars for pulling because concurrent processing makes
-that hard to track. For now, we simply output whichever padid we're finished
-with.
-
----
-
-**October 2019**
-
-Improve `etherpump --help` handling to make it easier for new users.
-
-Added the `python-dateutil` and `pypandoc` dependencies
-
-Added a fancy progress bar with `tqdm` for long running `etherpump pull --all` calls
-
-Started with the [experimental library API](#library-api-example).
-
----
-
 **September 2019**
 
-Forking _etherdump_ into _etherpump_.
+Forking *etherdump* into *etherpump*. (Work in progress!)
 
 <https://git.vvvvvvaria.org/varia/etherpump>
 
-Migrating the source code to Python 3.
-
-Integrate PyPi publishing with setuptools.
-
----
+-----
 
 **May - September 2019**
 
-etherpump is used to produce the _Ruminating Relearn_ section of the Network Of One's Own 2 (NOOO2) publication.
+Etherdump is used to produce the *Ruminating Relearn* section of the Network Of One's Own 2 (NOOO2) publication.
 
 A new command is added to make a web publication, based on the custom magic word `__RELEARN__`.
 
----
+-----
 
 **June 2019**
 
-Multiple conversations around etherpump emerged during Relearn Curved in Varia, Rotterdam.
+Multiple conversations around etherdump emerged during Relearn Curved in Varia, Rotterdam.
 
-Including the idea of executable pads (_etherhooks_), custom magic words, a federated snippet protocol (_etherstekje_) and more.
+Including the idea of executable pads (*etherhooks*), custom magic words, a federated snippet protocol (*etherstekje*) and more.
 
 <https://varia.zone/relearn-2019.html>
 
----
+-----
 
 **April 2019**
 
-Installation of etherpump on the Varia server.
+Installation of etherdump on the Varia server.
 
-<https://etherpump.vvvvvvaria.org/>
+<https://etherdump.vvvvvvaria.org/>
 
----
+-----
 
 **March 2019**
 
-The `__PUBLISH__ vs. __NOPUBLISH__` was added to the etherpump repository by _decentral1se_.
+The `__PUBLISH__ vs. __NOPUBLISH__` was added to the etherdump repository by *decentral1se*.
 
-<https://gitlab.constantvzw.org/aa/etherpump/issues/3>
+<https://gitlab.constantvzw.org/aa/etherdump/issues/3>
 
----
+-----
 
 Originally designed for use at: [Constant](http://etherdump.constantvzw.org/).
 
 More notes can be found in the [git repository of etherdump](https://gitlab.constantvzw.org/aa/etherdump).
 
-# Install etherpump
-
-`$ pip install etherpump`
-
-Etherpump only supports Python >= 3.6.
-
-## Command-line example
+Install etherpump
+=================
+
+Requirements
+-------------
+
+* python3
+* html5lib
+* requests (settext)
+* python-dateutil, jinja2 (used by the index subcommand)
+
+Installation
+-------------
+
+`$ pip install python-dateutil jinja2 html5lib`
+
+`$ python setup.py install`
+
+Example
+---------------
 
 ```
 $ mkdir mydump
 $ cd myddump
-$ etherpump init
+$ etherdump init
 ```
 
 The program then interactively asks some questions:
 
-> Please type the URL of the etherpad (e.g. https://pad.vvvvvvaria.org):
-
-https://pad.vvvvvvaria.org/
+```
+> Please type the URL of the etherpad:
+> https://pad.vvvvvvaria.org/
+```
 
 The APIKEY is the contents of the file APIKEY.txt in the etherpad folder.
 
-> Please paste the APIKEY:
-
-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+```
+> Please paste the APIKEY:
+> xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
+```
 
-The settings are placed in a file called `.etherpump/settings.json` and are used (by default) by future commands.
+The settings are placed in a file called `.etherdump/settings.json` and are used (by default) by future commands.
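As a usage note for the init walkthrough above: later commands read these settings back from disk. A small sketch of that round trip; the exact key set is an assumption here, but `apikey` and `apiurl` are the keys the commands in this compare rely on:

```python
import json

# Read back what `etherpump init` wrote (key names partly assumed).
with open(".etherpump/settings.json") as handle:
    settings = json.load(handle)

print(settings["apiurl"])  # the pad server's API endpoint
print(settings["apikey"])  # contents of the server's APIKEY.txt
```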
-## Common Workflows
-
-### Text+Meta performance wrangling
-
-If you have a lot of pads, you might want to try the following to speed things
-up. This example is something we do at Varia. Firstly, you download all the
-pads text + metadata as the only formats. This is likely what you want when
-you're trying to work directly with the text. You can do that like so:
-
-```bash
-$ etherpump pull --text --meta --publish-opt-in
-```
-
-The key here is to get the `--meta` so that etherpump is able to read quickly
-and skip it on the following run if there are no new revisions. So, in practice,
-you get a slower first run and faster following runs as more pads are skipped
-from actually doing a file system write to save the contents which we already
-have.
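The revision-based skip can be pictured as a comparison against the stored `.meta.json`; this is a sketch of the idea, not the actual `pull` internals, and the `revisions` key name is an assumption borrowed from the dumpcsv columns:

```python
import json
import os


def needs_update(padid, current_revisions, pub_path="p"):
    """Sketch: only re-download a pad whose revision count has moved on."""
    meta_path = os.path.join(pub_path, "{}.meta.json".format(padid))
    if not os.path.exists(meta_path):
        return True  # never pulled before
    with open(meta_path) as handle:
        meta = json.load(handle)
    return meta.get("revisions") != current_revisions
```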
-## Library API Example
+Subcommands
+----------
 
-Etherpump can be used as a library.
-
-All commands can be imported and run programmatically.
-
-```python
->>> from etherpump.api import pull
->>> pull(['--text', '--meta', '--publish-opt-in'])
-```
-
-There is also a Magic Word interface. It supports the following API:
-
-> magic_word(word, fresh)
-
-- **word**: The magic word to match pad text against (e.g. `__PUB_CLUB__`)
-- **fresh** (default: `True`): Whether or not to run a `etherpump pull` each time
-
-Here is an example:
-
-```python
-from etherpump.api import magic_word
-
-@magic_word("__PUB_CLUB__", fresh=False)
-def pub_club_texts(pads):
-    for name in pads:
-        print(pads[name]["txt"])
-
-pub_club_texts()
-```
-
-`pads` is a dictionary which contains pad names as keys and pad text as values.
-
-Normally, the `fresh=False` is useful when you're hacking away and want to read
-pad contents from the local file system and not over the network each time.
-
-## Subcommands
-
-To see all available subcommands, run:
-
-`$ etherpump --help`
-
-For help on each individual subcommand, run:
-
-`$ etherpump revisionscount --help`
-
-## Publishing
-
-Please use ["semver"](https://semver.org/) conventions for versions.
-
-Here are the steps to follow (e.g. for a `0.1.3` release):
-
-- Change the version number in the `etherpump/__init__.py` `__VERSION__` to `0.1.3`
-- Change the version number in the `pyproject.toml` `version` field to `0.1.3`
-- `git add . && git commit -m "Publish new 0.1.3 version" && git tag 0.1.3 && git push --tags`
-- Run `poetry publish --build`
-
-You should have a [PyPi](https://pypi.org/) account and be added as an owner/maintainer on the [etherpump package](https://pypi.org/project/etherpump/).
-
-## Testing
-
-It can be quite handy to run a very temporary local Etherpad instance to test against. This is possible with [Docker](https://docs.docker.com/get-docker/).
-
-```bash
-$ docker run -d --name etherpad -p 9001:9001 etherpad/etherpad
-$ docker exec -ti etherpad cat APIKEY.txt;echo
-```
-
-Then you can `etherpump init` to that local Etherpad for experimentation and testing. You use `http://localhost:9001` as the pad URL.
-
-Later on, you can remove the Etherpad with:
-
-```bash
-$ docker rm -f --volumes etherpad
-```
-
-## Maintenance utilities
-
-Tools to help things stay tidy over time.
-
-```bash
-$ make
-```
-
-Please see the following links for further reading:
-
-- [flake8](http://flake8.pycqa.org)
-- [isort](https://isort.readthedocs.io)
-- [black](https://black.readthedocs.io)
-
-### Server Systers Situation
-
-```
-$ sudo -su systers
-$ cd /var/www/etherpump
-$ sh cron.sh
-```
-
-Served from `/etc/nginx/sites-enabled/etherpump.vvvvvvaria.conf`.
-
-## Keeping track of Etherpad-lite
-
-- [Etherpad-lite API documentation](https://etherpad.org/doc/v1.7.5/)
-- [Etherpad-lite releases](https://github.com/ether/etherpad-lite/releases)
-
-# License
-
-GNU AFFERO GENERAL PUBLIC LICENSE, Version 3.
-
-See [LICENSE](./LICENSE).
+* init
+* pull
+* list
+* listauthors
+* gettext
+* settext
+* gethtml
+* creatediffhtml
+* revisionscount
+* index
+* deletepad
+* publication (*etherpump*)
+
+To get help on a subcommand:
+
+`$ etherdump revisionscount --help`
+
+License
+=======
+
+GNU AFFERO GENERAL PUBLIC LICENSE, Version 3
+
+See `License.txt`

bin/etherpump (new executable file): 44 lines changed

@@ -0,0 +1,44 @@
+#!/usr/bin/env python3
+from __future__ import print_function
+import sys
+
+usage = """Usage:
+    etherpump CMD
+
+where CMD could be:
+    pull
+    index
+    dumpcsv
+    gettext
+    gethtml
+    creatediffhtml
+    list
+    listauthors
+    revisionscount
+    showmeta
+    html5tidy
+
+For more information on each command try:
+    etherpump CMD --help
+"""
+
+try:
+    cmd = sys.argv[1]
+    if cmd.startswith("-"):
+        cmd = "sync"
+        args = sys.argv
+    else:
+        args = sys.argv[2:]
+except IndexError:
+    print (usage)
+    sys.exit(0)
+
+try:
+    # http://stackoverflow.com/questions/301134/dynamic-module-import-in-python
+    cmdmod = __import__("etherpump.commands.%s" % cmd, fromlist=["etherdump.commands"])
+    cmdmod.main(args)
+
+except ImportError as e:
+    print ("Error performing command '{0}'\n(python said: {1})\n".format(cmd, e))
+    print (usage)
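The dynamic `__import__` above resolves the first CLI argument to a module under `etherpump.commands` and calls its `main`. The same dispatch written with importlib, as a sketch rather than repo code:

```python
from importlib import import_module


def dispatch(cmd, args):
    """Resolve `etherpump CMD ...` to etherpump.commands.CMD.main(args)."""
    module = import_module("etherpump.commands.{}".format(cmd))
    module.main(args)


# dispatch("list", ["--help"])  # same effect as: etherpump list --help
```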

cron.sh (deleted): 24 lines changed

@@ -1,24 +0,0 @@
-echo "Pulling pads..."
-
-/usr/local/bin/poetry run etherpump pull \
-    --meta \
-    --html \
-    --text \
-    --magicwords \
-    --publish-opt-in \
-    --pub p \
-    --css ../stylesheet.css \
-    --fix-names \
-    --connection 5 \
-    --force
-
-echo "Building the etherpump index..."
-
-/usr/local/bin/poetry run etherpump index \
-    input \
-    p/*.meta.json \
-    --templatepath templates \
-    --title "Notes, __MAGICWORDS__, readers & more ..." \
-    --output index.html
-
-echo "Done!"

etherpump.egg-info/PKG-INFO (new file)

@@ -0,0 +1,10 @@
+Metadata-Version: 1.0
+Name: etherpump
+Version: 0.0.1
+Summary: Etherpump an etherpad publishing system
+Home-page: https://git.vvvvvvaria.org/varia/etherpump
+Author: Varia members
+Author-email: info@varia.zone
+License: LICENSE.txt
+Description: UNKNOWN
+Platform: UNKNOWN

etherpump.egg-info/SOURCES.txt (new file)

@@ -0,0 +1,35 @@
+README.md
+setup.py
+bin/etherpump
+etherpump/__init__.py
+etherpump.egg-info/PKG-INFO
+etherpump.egg-info/SOURCES.txt
+etherpump.egg-info/dependency_links.txt
+etherpump.egg-info/requires.txt
+etherpump.egg-info/top_level.txt
+etherpump/commands/__init__.py
+etherpump/commands/appendmeta.py
+etherpump/commands/common.py
+etherpump/commands/creatediffhtml.py
+etherpump/commands/deletepad.py
+etherpump/commands/dumpcsv.py
+etherpump/commands/gethtml.py
+etherpump/commands/gettext.py
+etherpump/commands/html5tidy.py
+etherpump/commands/index.py
+etherpump/commands/init.py
+etherpump/commands/join.py
+etherpump/commands/list.py
+etherpump/commands/listauthors.py
+etherpump/commands/publication.py
+etherpump/commands/pull.py
+etherpump/commands/revisionscount.py
+etherpump/commands/sethtml.py
+etherpump/commands/settext.py
+etherpump/commands/showmeta.py
+etherpump/commands/status.py
+etherpump/data/templates/index.html
+etherpump/data/templates/pad.html
+etherpump/data/templates/pad_colors.html
+etherpump/data/templates/pad_index.html
+etherpump/data/templates/rss.xml

etherpump.egg-info/dependency_links.txt (new file)

@@ -0,0 +1 @@
+
etherpump.egg-info/requires.txt (new file)

@@ -0,0 +1,2 @@
+html5lib
+jinja2

etherpump.egg-info/top_level.txt (new file)

@@ -0,0 +1 @@
+etherpump

etherpump/__init__.py

@@ -1,100 +1,3 @@
-#!/usr/bin/env python3
 import os
-import sys
 
 DATAPATH = os.path.join(os.path.dirname(os.path.realpath(__file__)), "data")
-
-__VERSION__ = "0.0.20"
-
-
-def subcommands():
-    """List all sub-commands for the `--help` output."""
-    output = []
-    subcommands = [
-        "creatediffhtml",
-        "deletepad",
-        "dumpcsv",
-        "gethtml",
-        "gettext",
-        "index",
-        "init",
-        "list",
-        "listauthors",
-        "publication",
-        "pull",
-        "revisionscount",
-        "sethtml",
-        "settext",
-        "showmeta",
-    ]
-    for subcommand in subcommands:
-        try:
-            # http://stackoverflow.com/questions/301134/dynamic-module-import-in-python
-            doc = __import__(
-                "etherpump.commands.%s" % subcommand,
-                fromlist=["etherdump.commands"],
-            ).__doc__
-        except ModuleNotFoundError:
-            doc = ""
-        output.append(f"  {subcommand}: {doc}")
-    output.sort()
-    return "\n".join(output)
-
-
-usage = """
-_
-| |
-_ _|_ | | _ ,_ _ _ _ _ _
-|/ | |/ \ |/ / | |/ \_| | / |/ |/ | |/ \_
-|__/|_/| |_/|__/ |_/|__/ \_/|_/ | | |_/|__/
-/| /|
-\| \|
-
-Usage:
-    etherpump CMD
-
-where CMD could be:
-
-{}
-
-For more information on each command try:
-    etherpump CMD --help""".format(
-    subcommands()
-)
-
-
-def main():
-    try:
-        cmd = sys.argv[1]
-        if cmd.startswith("-"):
-            args = sys.argv
-        else:
-            args = sys.argv[2:]
-        if len(sys.argv) < 3:
-            if any(arg in args for arg in ["--help", "-h"]):
-                print(usage)
-                sys.exit(0)
-            elif any(arg in args for arg in ["--version", "-v"]):
-                print("etherpump {}".format(__VERSION__))
-                sys.exit(0)
-    except IndexError:
-        print(usage)
-        sys.exit(0)
-
-    try:
-        # http://stackoverflow.com/questions/301134/dynamic-module-import-in-python
-        cmdmod = __import__(
-            "etherpump.commands.%s" % cmd, fromlist=["etherdump.commands"]
-        )
-        cmdmod.main(args)
-    except ImportError as e:
-        print(
-            "Error performing command '{0}'\n(python said: {1})\n".format(
-                cmd, e
-            )
-        )
-        print(usage)

etherpump/api.py (deleted)

@@ -1,67 +0,0 @@
-from functools import wraps
-from os.path import exists
-from pathlib import Path
-from urllib.parse import urlencode
-
-from etherpump.commands.common import getjson, loadpadinfo
-from etherpump.commands.creatediffhtml import main as creatediffhtml  # noqa
-from etherpump.commands.deletepad import main as deletepad  # noqa
-from etherpump.commands.dumpcsv import main as dumpcsv  # noqa
-from etherpump.commands.gethtml import main as gethtml  # noqa
-from etherpump.commands.gettext import main as gettext  # noqa
-from etherpump.commands.index import main as index  # noqa
-from etherpump.commands.init import main  # noqa
-from etherpump.commands.init import main as init
-from etherpump.commands.list import main as list  # noqa
-from etherpump.commands.listauthors import main as listauthors  # noqa
-from etherpump.commands.publication import main as publication  # noqa
-from etherpump.commands.pull import main as pull
-from etherpump.commands.revisionscount import main as revisionscount  # noqa
-from etherpump.commands.sethtml import main as sethtml  # noqa
-from etherpump.commands.settext import main as settext  # noqa
-from etherpump.commands.showmeta import main as showmeta  # noqa
-
-
-def ensure_init():
-    path = Path(".etherpump/settings.json").absolute()
-    if not exists(path):
-        try:
-            main([])
-        except SystemExit:
-            pass
-
-
-def get_pad_ids():
-    info = loadpadinfo(Path(".etherpump/settings.json"))
-    data = {"apikey": info["apikey"]}
-    url = info["localapiurl"] + "listAllPads?" + urlencode(data)
-    return getjson(url)["data"]["padIDs"]
-
-
-def magic_word(word, fresh=True):
-    ensure_init()
-    if fresh:
-        pull(["--text", "--meta", "--publish-opt-in", "--publish", word])
-    pads = {}
-    pad_ids = get_pad_ids()
-    for pad_id in pad_ids:
-        path = Path("./p/{}.raw.txt".format(pad_id)).absolute()
-        try:
-            with open(path, "r") as handle:
-                text = handle.read()
-            if word in text:
-                pads[pad_id] = {}
-                pads[pad_id]["txt"] = text
-        except FileNotFoundError:
-            continue
-
-    def _magic_word(func):
-        @wraps(func)
-        def wrapper(*args, **kwargs):
-            return func(pads)
-
-        return wrapper
-
-    return _magic_word
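For reference, the two plain helpers in the removed module above compose like this (the pad ids printed depend on the configured server):

```python
from etherpump.api import ensure_init, get_pad_ids

ensure_init()  # runs the interactive init if .etherpump/settings.json is missing
for pad_id in get_pad_ids():  # one listAllPads API call via getjson
    print(pad_id)
```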

etherpump/commands/appendmeta.py

@@ -1,8 +1,8 @@
 #!/usr/bin/env python
-import json
+from __future__ import print_function
 from argparse import ArgumentParser
+import json, os
 
 def main(args):
     p = ArgumentParser("")
@@ -18,6 +18,6 @@ def main(args):
         ret.append(meta)
 
     if args.indent:
-        print(json.dumps(ret, indent=args.indent))
+        print (json.dumps(ret, indent=args.indent))
     else:
-        print(json.dumps(ret))
+        print (json.dumps(ret))

etherpump/commands/common.py

@@ -1,31 +1,40 @@
-import json
-import os
-import re
-import sys
-from html.entities import name2codepoint
+from __future__ import print_function
+import re, os, json, sys
+from math import ceil, floor
 from time import sleep
-from urllib.parse import quote_plus, unquote_plus
-from urllib.request import HTTPError, urlopen
 
-import trio
+try:
+    # python2
+    from urlparse import urlparse, urlunparse
+    from urllib2 import urlopen, URLError, HTTPError
+    from urllib import urlencode
+    from urllib import quote_plus, unquote_plus
+    from htmlentitydefs import name2codepoint
+    input = raw_input
+except ImportError:
+    # python3
+    from urllib.parse import urlparse, urlunparse, urlencode, quote_plus, unquote_plus
+    from urllib.request import urlopen, URLError, HTTPError
+    from html.entities import name2codepoint
 
 groupnamepat = re.compile(r"^g\.(\w+)\$")
 
-def splitpadname(padid):
+def splitpadname (padid):
     m = groupnamepat.match(padid)
     if m:
-        return (m.group(1), padid[m.end() :])
+        return(m.group(1), padid[m.end():])
     else:
-        return ("", padid)
+        return (u"", padid)
 
-def padurl(padid,):
+def padurl (padid, ):
     return padid
 
-def padpath(padid, pub_path="", group_path="", normalize=False):
+def padpath (padid, pub_path=u"", group_path=u"", normalize=False):
     g, p = splitpadname(padid)
+    # if type(g) == unicode:
+    #     g = g.encode("utf-8")
+    # if type(p) == unicode:
+    #     p = p.encode("utf-8")
     p = quote_plus(p)
     if normalize:
         p = p.replace(" ", "_")
@@ -38,8 +47,9 @@ def padpath(padid, pub_path="", group_path="", normalize=False):
     else:
         return os.path.join(pub_path, p)
 
-def padpath2id(path):
+def padpath2id (path):
+    if type(path) == unicode:
+        path = path.encode("utf-8")
     dd, p = os.path.split(path)
     gname = dd.split("/")[-1]
     p = unquote_plus(p)
@@ -48,8 +58,7 @@ def padpath2id(path):
     else:
         return p.decode("utf-8")
 
-def getjson(url, max_retry=3, retry_sleep_time=3):
+def getjson (url, max_retry=3, retry_sleep_time=3):
     ret = {}
     ret["_retries"] = 0
     while ret["_retries"] <= max_retry:
@@ -67,47 +76,32 @@ def getjson(url, max_retry=3, retry_sleep_time=3):
         except ValueError as e:
             url = "http://localhost" + url
         except HTTPError as e:
-            print("HTTPError {0}".format(e), file=sys.stderr)
+            print ("HTTPError {0}".format(e), file=sys.stderr)
             ret["_code"] = e.code
-            ret["_retries"] += 1
+            ret["_retries"]+=1
             if retry_sleep_time:
                 sleep(retry_sleep_time)
     return ret
 
-async def agetjson(session, url):
-    """The asynchronous version of getjson."""
-    RETRY = 20
-    TIMEOUT = 10
-    ret = {}
-    ret["_retries"] = 0
-    try:
-        response = await session.get(url, timeout=TIMEOUT, retries=RETRY)
-        rurl = response.url
-        ret.update(response.json())
-        ret["_code"] = response.status_code
-        if rurl != url:
-            ret["_url"] = rurl
-        return ret
-    except Exception as e:
-        print("Failed to download {}, saw {}".format(url, str(e)))
-        return
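A sketch of how the removed `agetjson` coroutine was meant to be driven, with trio supplying the nursery and asks the session; the session construction details here are assumptions:

```python
import asks
import trio


async def fetch_all(urls):
    session = asks.Session(connections=5)  # keep the connection pool modest
    results = []

    async def fetch(url):
        results.append(await agetjson(session, url))

    async with trio.open_nursery() as nursery:
        for url in urls:
            nursery.start_soon(fetch, url)
    return results


# trio.run(fetch_all, urls)  # urls built as in get_pad_ids/getjson
```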
 def loadpadinfo(p):
     with open(p) as f:
         info = json.load(f)
-    if "localapiurl" not in info:
-        info["localapiurl"] = info.get("apiurl")
+    if 'localapiurl' not in info:
+        info['localapiurl'] = info.get('apiurl')
     return info
 
+def progressbar (i, num, label="", file=sys.stderr):
+    p = float(i) / num
+    percentage = int(floor(p*100))
+    bars = int(ceil(p*20))
+    bar = ("*"*bars) + ("-"*(20-bars))
+    msg = u"\r{0} {1}/{2} {3}... ".format(bar, (i+1), num, label)
+    sys.stderr.write(msg)
+    sys.stderr.flush()
 
-# Python developer Fredrik Lundh (author of elementtree, among other things)
-# has such a function on his website, which works with decimal, hex and named
-# entities:
+# Python developer Fredrik Lundh (author of elementtree, among other things) has such a function on his website, which works with decimal, hex and named entities:
 ##
 # Removes HTML or XML character references and entities from a text string.
 #
@@ -120,26 +114,17 @@ def unescape(text):
             # character reference
             try:
                 if text[:3] == "&#x":
-                    return chr(int(text[3:-1], 16))
+                    return unichr(int(text[3:-1], 16))
                 else:
-                    return chr(int(text[2:-1]))
+                    return unichr(int(text[2:-1]))
             except ValueError:
                 pass
         else:
             # named entity
             try:
-                text = chr(name2codepoint[text[1:-1]])
+                text = unichr(name2codepoint[text[1:-1]])
             except KeyError:
                 pass
         return text  # leave as is
     return re.sub("&#?\w+;", fixup, text)
-
-def istty():
-    return sys.stdout.isatty() and os.environ.get("TERM") != "dumb"
-
-def chunks(lst, n):
-    for i in range(0, len(lst), n):
-        yield lst[i : i + n]
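A quick usage check of the pad-name helpers above; the expected values follow directly from `groupnamepat` and `quote_plus`:

```python
from etherpump.commands.common import padpath, splitpadname

# group pads carry a "g.<groupid>$" prefix, public pads do not
assert splitpadname("g.xk2la9$meeting-notes") == ("xk2la9", "meeting-notes")
assert splitpadname("meeting-notes") == ("", "meeting-notes")

# padpath quote_plus-encodes the pad name before joining it to pub_path
print(padpath("my pad", pub_path="p"))  # p/my+pad
```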

etherpump/commands/creatediffhtml.py

@@ -1,31 +1,17 @@
-"""Calls the createDiffHTML API function for the given padid"""
-
-import json
+from __future__ import print_function
 from argparse import ArgumentParser
-from urllib.error import HTTPError, URLError
-from urllib.parse import urlencode
-from urllib.request import urlopen
+import json
+from urllib import urlencode
+from urllib2 import urlopen, HTTPError, URLError
 
 def main(args):
-    p = ArgumentParser(
-        "calls the createDiffHTML API function for the given padid"
-    )
+    p = ArgumentParser("calls the createDiffHTML API function for the given padid")
     p.add_argument("padid", help="the padid")
-    p.add_argument(
-        "--padinfo",
-        default=".etherpump/settings.json",
-        help="settings, default: .etherdump/settings.json",
-    )
+    p.add_argument("--padinfo", default=".etherpump/settings.json", help="settings, default: .etherdump/settings.json")
     p.add_argument("--showurl", default=False, action="store_true")
-    p.add_argument(
-        "--format",
-        default="text",
-        help="output format, can be: text, json; default: text",
-    )
-    p.add_argument(
-        "--rev", type=int, default=None, help="revision, default: latest"
-    )
+    p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
+    p.add_argument("--rev", type=int, default=None, help="revision, default: latest")
     args = p.parse_args(args)
 
     with open(args.padinfo) as f:
@@ -33,20 +19,20 @@ def main(args):
     apiurl = info.get("apiurl")
     # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
     data = {}
-    data["apikey"] = info["apikey"]
-    data["padID"] = args.padid
-    data["startRev"] = "0"
+    data['apikey'] = info['apikey']
+    data['padID'] = args.padid
+    data['startRev'] = "0"
     if args.rev != None:
-        data["rev"] = args.rev
-    requesturl = apiurl + "createDiffHTML?" + urlencode(data)
+        data['rev'] = args.rev
+    requesturl = apiurl+'createDiffHTML?'+urlencode(data)
     if args.showurl:
-        print(requesturl)
+        print (requesturl)
    else:
         try:
-            results = json.load(urlopen(requesturl))["data"]
+            results = json.load(urlopen(requesturl))['data']
             if args.format == "json":
-                print(json.dumps(results))
+                print (json.dumps(results))
             else:
-                print(results["html"])
+                print (results['html'].encode("utf-8"))
         except HTTPError as e:
             pass

etherpump/commands/deletepad.py

@@ -1,41 +1,32 @@
-"""Calls the getText API function for the given padid"""
-
-import json
+from __future__ import print_function
 from argparse import ArgumentParser
-from urllib.error import HTTPError, URLError
-from urllib.parse import urlencode
-from urllib.request import urlopen
+import json
+from urllib import urlencode
+from urllib2 import urlopen, HTTPError, URLError
 
 def main(args):
     p = ArgumentParser("calls the getText API function for the given padid")
     p.add_argument("padid", help="the padid")
-    p.add_argument(
-        "--padinfo",
-        default=".etherpump/settings.json",
-        help="settings, default: .etherdump/settings.json",
-    )
+    p.add_argument("--padinfo", default=".etherpump/settings.json", help="settings, default: .etherdump/settings.json")
     p.add_argument("--showurl", default=False, action="store_true")
-    p.add_argument(
-        "--format",
-        default="text",
-        help="output format, can be: text, json; default: text",
-    )
+    p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
     args = p.parse_args(args)
 
     with open(args.padinfo) as f:
         info = json.load(f)
     apiurl = info.get("apiurl")
+    # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
     data = {}
-    data["apikey"] = info["apikey"]
-    data["padID"] = args.padid
-    requesturl = apiurl + "deletePad?" + urlencode(data)
+    data['apikey'] = info['apikey']
+    data['padID'] = args.padid  # is utf-8 encoded
+    requesturl = apiurl+'deletePad?'+urlencode(data)
     if args.showurl:
-        print(requesturl)
+        print (requesturl)
     else:
         results = json.load(urlopen(requesturl))
         if args.format == "json":
-            print(json.dumps(results))
+            print (json.dumps(results))
         else:
-            if results["data"]:
-                print(results["data"]["text"])
+            if results['data']:
+                print (results['data']['text'].encode("utf-8"))

etherpump/commands/dumpcsv.py

@@ -1,15 +1,11 @@
-"""Dumps a CSV of all pads"""
-
-import json
-import re
-import sys
+from __future__ import print_function
 from argparse import ArgumentParser
-from csv import writer
+import sys, json, re
 from datetime import datetime
+from urllib import urlencode
+from urllib2 import urlopen, HTTPError, URLError
+from csv import writer
 from math import ceil, floor
-from urllib.error import HTTPError, URLError
-from urllib.parse import urlencode
-from urllib.request import urlopen
 
 """
 Dumps a CSV of all pads with columns
@@ -19,88 +15,69 @@ padid, groupid, revisions, lastedited, author_ids
 
 groupid is without (g. $)
 revisions is an integral number of edits
 lastedited is ISO8601 formatted
 author_ids is a space delimited list of internal author IDs
 
 """
 
 groupnamepat = re.compile(r"^g\.(\w+)\$")
 out = writer(sys.stdout)
 
-def jsonload(url):
+def jsonload (url):
     f = urlopen(url)
     data = f.read()
     f.close()
     return json.loads(data)
 
-def main(args):
+def main (args):
     p = ArgumentParser("outputs a CSV of information all all pads")
-    p.add_argument(
-        "--padinfo",
-        default=".etherpump/settings.json",
-        help="settings, default: .etherdump/settings.json",
-    )
-    p.add_argument(
-        "--zerorevs",
-        default=False,
-        action="store_true",
-        help="include pads with zero revisions, default: False",
-    )
+    p.add_argument("--padinfo", default=".etherpump/settings.json", help="settings, default: .etherdump/settings.json")
+    p.add_argument("--zerorevs", default=False, action="store_true", help="include pads with zero revisions, default: False")
     args = p.parse_args(args)
 
     with open(args.padinfo) as f:
         info = json.load(f)
     apiurl = info.get("apiurl")
     data = {}
-    data["apikey"] = info["apikey"]
-    requesturl = apiurl + "listAllPads?" + urlencode(data)
-    padids = jsonload(requesturl)["data"]["padIDs"]
+    data['apikey'] = info['apikey']
+    requesturl = apiurl+'listAllPads?'+urlencode(data)
+    padids = jsonload(requesturl)['data']['padIDs']
     padids.sort()
     numpads = len(padids)
     maxmsglen = 0
     count = 0
     out.writerow(("padid", "groupid", "lastedited", "revisions", "author_ids"))
     for i, padid in enumerate(padids):
-        p = float(i) / numpads
-        percentage = int(floor(p * 100))
-        bars = int(ceil(p * 20))
-        bar = ("*" * bars) + ("-" * (20 - bars))
-        msg = "\r{0} {1}/{2} {3}... ".format(bar, (i + 1), numpads, padid)
+        p = (float(i) / numpads)
+        percentage = int(floor(p*100))
+        bars = int(ceil(p*20))
+        bar = ("*"*bars) + ("-"*(20-bars))
+        msg = u"\r{0} {1}/{2} {3}... ".format(bar, (i+1), numpads, padid)
         if len(msg) > maxmsglen:
             maxmsglen = len(msg)
-        sys.stderr.write("\r{0}".format(" " * maxmsglen))
-        sys.stderr.write(msg)
+        sys.stderr.write("\r{0}".format(" "*maxmsglen))
+        sys.stderr.write(msg.encode("utf-8"))
         sys.stderr.flush()
         m = groupnamepat.match(padid)
         if m:
             groupname = m.group(1)
-            padidnogroup = padid[m.end() :]
+            padidnogroup = padid[m.end():]
         else:
-            groupname = ""
+            groupname = u""
             padidnogroup = padid
 
-        data["padID"] = padid
-        revisions = jsonload(apiurl + "getRevisionsCount?" + urlencode(data))[
-            "data"
-        ]["revisions"]
+        data['padID'] = padid.encode("utf-8")
+        revisions = jsonload(apiurl+'getRevisionsCount?'+urlencode(data))['data']['revisions']
        if (revisions == 0) and not args.zerorevs:
             continue
 
-        lastedited_raw = jsonload(apiurl + "getLastEdited?" + urlencode(data))[
-            "data"
-        ]["lastEdited"]
-        lastedited_iso = datetime.fromtimestamp(
-            int(lastedited_raw) / 1000
-        ).isoformat()
-        author_ids = jsonload(apiurl + "listAuthorsOfPad?" + urlencode(data))[
-            "data"
-        ]["authorIDs"]
-        author_ids = " ".join(author_ids)
-        out.writerow(
-            (padidnogroup, groupname, revisions, lastedited_iso, author_ids)
-        )
+        lastedited_raw = jsonload(apiurl+'getLastEdited?'+urlencode(data))['data']['lastEdited']
+        lastedited_iso = datetime.fromtimestamp(int(lastedited_raw)/1000).isoformat()
+        author_ids = jsonload(apiurl+'listAuthorsOfPad?'+urlencode(data))['data']['authorIDs']
+        author_ids = u" ".join(author_ids).encode("utf-8")
+        out.writerow((padidnogroup.encode("utf-8"), groupname.encode("utf-8"), revisions, lastedited_iso, author_ids))
         count += 1
     print("\nWrote {0} rows...".format(count), file=sys.stderr)

etherpump/commands/gethtml.py

@@ -1,29 +1,17 @@
-"""Calls the getHTML API function for the given padid"""
-
-import json
+from __future__ import print_function
 from argparse import ArgumentParser
-from urllib.error import HTTPError, URLError
-from urllib.parse import urlencode
-from urllib.request import urlopen
+import json
+from urllib import urlencode
+from urllib2 import urlopen, HTTPError, URLError
 
 def main(args):
     p = ArgumentParser("calls the getHTML API function for the given padid")
     p.add_argument("padid", help="the padid")
-    p.add_argument(
-        "--padinfo",
-        default=".etherpump/settings.json",
-        help="settings, default: .etherdump/settings.json",
-    )
+    p.add_argument("--padinfo", default=".etherpump/settings.json", help="settings, default: .etherdump/settings.json")
     p.add_argument("--showurl", default=False, action="store_true")
-    p.add_argument(
-        "--format",
-        default="text",
-        help="output format, can be: text, json; default: text",
-    )
-    p.add_argument(
-        "--rev", type=int, default=None, help="revision, default: latest"
-    )
+    p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
+    p.add_argument("--rev", type=int, default=None, help="revision, default: latest")
     args = p.parse_args(args)
 
     with open(args.padinfo) as f:
@@ -31,16 +19,16 @@ def main(args):
     apiurl = info.get("apiurl")
     # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
     data = {}
-    data["apikey"] = info["apikey"]
-    data["padID"] = args.padid
+    data['apikey'] = info['apikey']
+    data['padID'] = args.padid
     if args.rev != None:
-        data["rev"] = args.rev
-    requesturl = apiurl + "getHTML?" + urlencode(data)
+        data['rev'] = args.rev
+    requesturl = apiurl+'getHTML?'+urlencode(data)
     if args.showurl:
-        print(requesturl)
+        print (requesturl)
     else:
-        results = json.load(urlopen(requesturl))["data"]
+        results = json.load(urlopen(requesturl))['data']
         if args.format == "json":
-            print(json.dumps(results))
+            print (json.dumps(results))
         else:
-            print(results["html"])
+            print (results['html'].encode("utf-8"))

etherpump/commands/gettext.py

@@ -1,29 +1,23 @@
-"""Calls the getText API function for the given padid"""
-
-import json
-import sys
+from __future__ import print_function
 from argparse import ArgumentParser
-from urllib.parse import urlencode
-from urllib.request import HTTPError, URLError, urlopen
+import json, sys
+try:
+    # python2
+    from urllib2 import urlopen, URLError, HTTPError
+    from urllib import urlencode
+except ImportError:
+    # python3
+    from urllib.parse import urlencode
+    from urllib.request import urlopen, URLError, HTTPError
 
 def main(args):
     p = ArgumentParser("calls the getText API function for the given padid")
     p.add_argument("padid", help="the padid")
-    p.add_argument(
-        "--padinfo",
-        default=".etherpump/settings.json",
-        help="settings, default: .etherdump/settings.json",
-    )
+    p.add_argument("--padinfo", default=".etherpump/settings.json", help="settings, default: .etherdump/settings.json")
     p.add_argument("--showurl", default=False, action="store_true")
-    p.add_argument(
-        "--format",
-        default="text",
-        help="output format, can be: text, json; default: text",
-    )
-    p.add_argument(
-        "--rev", type=int, default=None, help="revision, default: latest"
-    )
+    p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
+    p.add_argument("--rev", type=int, default=None, help="revision, default: latest")
     args = p.parse_args(args)
 
     with open(args.padinfo) as f:
@@ -31,19 +25,19 @@ def main(args):
     apiurl = info.get("apiurl")
     # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
     data = {}
-    data["apikey"] = info["apikey"]
-    data["padID"] = args.padid  # is utf-8 encoded
+    data['apikey'] = info['apikey']
+    data['padID'] = args.padid  # is utf-8 encoded
     if args.rev != None:
-        data["rev"] = args.rev
-    requesturl = apiurl + "getText?" + urlencode(data)
+        data['rev'] = args.rev
+    requesturl = apiurl+'getText?'+urlencode(data)
     if args.showurl:
-        print(requesturl)
+        print (requesturl)
     else:
         resp = urlopen(requesturl).read()
         resp = resp.decode("utf-8")
         results = json.loads(resp)
         if args.format == "json":
-            print(json.dumps(results))
+            print (json.dumps(results))
         else:
-            if results["data"]:
-                sys.stdout.write(results["data"]["text"])
+            if results['data']:
+                sys.stdout.write(results['data']['text'])

etherpump/commands/html5tidy.py

@@ -1,31 +1,28 @@
 #!/usr/bin/env python3
-import os
-import sys
-from argparse import ArgumentParser
-from xml.etree import ElementTree as ET
+from __future__ import print_function
 
 from html5lib import parse
+import os, sys
+from argparse import ArgumentParser
+from xml.etree import ElementTree as ET
 
 def etree_indent(elem, level=0):
-    i = "\n" + level * "  "
+    i = "\n" + level*"  "
     if len(elem):
         if not elem.text or not elem.text.strip():
             elem.text = i + "  "
         if not elem.tail or not elem.tail.strip():
             elem.tail = i
         for elem in elem:
-            etree_indent(elem, level + 1)
+            etree_indent(elem, level+1)
         if not elem.tail or not elem.tail.strip():
             elem.tail = i
     else:
         if level and (not elem.tail or not elem.tail.strip()):
             elem.tail = i
 
-def get_link_type(url):
+def get_link_type (url):
     lurl = url.lower()
     if lurl.endswith(".html") or lurl.endswith(".htm"):
         return "text/html"
@@ -40,17 +37,13 @@ def get_link_type(url):
     elif lurl.endswith(".js") or lurl.endswith(".jsonp"):
         return "text/javascript"
 
-def pluralize(x):
+def pluralize (x):
     if type(x) == list or type(x) == tuple:
         return x
     else:
         return (x,)
 
-def html5tidy(
-    doc, charset="utf-8", title=None, scripts=None, links=None, indent=False
-):
+def html5tidy (doc, charset="utf-8", title=None, scripts=None, links=None, indent=False):
     if scripts:
         script_srcs = [x.attrib.get("src") for x in doc.findall(".//script")]
         for src in pluralize(scripts):
@@ -63,30 +56,21 @@ def html5tidy(
     for elt in doc.findall(".//link"):
         href = elt.attrib.get("href")
         if href:
             existinglinks[href] = elt
 
     for link in links:
         linktype = link.get("type") or get_link_type(link["href"])
         if link["href"] in existinglinks:
             elt = existinglinks[link["href"]]
             elt.attrib["rel"] = link["rel"]
         else:
-            elt = ET.SubElement(
-                doc.find(".//head"),
-                "link",
-                href=link["href"],
-                rel=link["rel"],
-            )
+            elt = ET.SubElement(doc.find(".//head"), "link", href=link["href"], rel=link["rel"])
         if linktype:
             elt.attrib["type"] = linktype
         if "title" in link:
             elt.attrib["title"] = link["title"]
 
     if charset:
-        meta_charsets = [
-            x.attrib.get("charset")
-            for x in doc.findall(".//meta")
-            if x.attrib.get("charset") != None
-        ]
+        meta_charsets = [x.attrib.get("charset") for x in doc.findall(".//meta") if x.attrib.get("charset") != None]
         if not meta_charsets:
             meta = ET.SubElement(doc.find(".//head"), "meta", charset=charset)
@@ -95,89 +79,33 @@ def html5tidy(
         if not titleelt:
             titleelt = ET.SubElement(doc.find(".//head"), "title")
         titleelt.text = title
     if indent:
         etree_indent(doc)
     return doc
 
-def main(args):
+def main (args):
     p = ArgumentParser("")
     p.add_argument("input", nargs="?", default=None)
     p.add_argument("--indent", default=False, action="store_true")
-    p.add_argument(
-        "--mogrify",
-        default=False,
-        action="store_true",
-        help="modify file in place",
-    )
-    p.add_argument(
-        "--method",
-        default="html",
-        help="method, default: html, values: html, xml, text",
-    )
+    p.add_argument("--mogrify", default=False, action="store_true", help="modify file in place")
+    p.add_argument("--method", default="html", help="method, default: html, values: html, xml, text")
     p.add_argument("--output", default=None, help="")
     p.add_argument("--title", default=None, help="ensure/add title tag in head")
-    p.add_argument(
-        "--charset", default="utf-8", help="ensure/add meta tag with charset"
-    )
-    p.add_argument(
-        "--script", action="append", default=[], help="ensure/add script tag"
-    )
+    p.add_argument("--charset", default="utf-8", help="ensure/add meta tag with charset")
+    p.add_argument("--script", action="append", default=[], help="ensure/add script tag")
     # <link>s, see https://www.w3.org/TR/html5/links.html#links
-    p.add_argument(
-        "--stylesheet",
-        action="append",
-        default=[],
-        help="ensure/add style link",
-    )
-    p.add_argument(
-        "--alternate",
-        action="append",
-        default=[],
-        nargs="+",
-        help="ensure/add alternate links (optionally followed by a title and type)",
-    )
-    p.add_argument(
-        "--next",
-        action="append",
-        default=[],
-        nargs="+",
-        help="ensure/add alternate link",
-    )
-    p.add_argument(
-        "--prev",
-        action="append",
-        default=[],
-        nargs="+",
-        help="ensure/add alternate link",
-    )
-    p.add_argument(
-        "--search",
-        action="append",
-        default=[],
-        nargs="+",
-        help="ensure/add search link",
-    )
-    p.add_argument(
-        "--rss",
-        action="append",
-        default=[],
-        nargs="+",
-        help="ensure/add alternate link of type application/rss+xml",
-    )
-    p.add_argument(
-        "--atom",
-        action="append",
-        default=[],
-        nargs="+",
-        help="ensure/add alternate link of type application/atom+xml",
-    )
+    p.add_argument("--stylesheet", action="append", default=[], help="ensure/add style link")
+    p.add_argument("--alternate", action="append", default=[], nargs="+", help="ensure/add alternate links (optionally followed by a title and type)")
+    p.add_argument("--next", action="append", default=[], nargs="+", help="ensure/add alternate link")
+    p.add_argument("--prev", action="append", default=[], nargs="+", help="ensure/add alternate link")
+    p.add_argument("--search", action="append", default=[], nargs="+", help="ensure/add search link")
+    p.add_argument("--rss", action="append", default=[], nargs="+", help="ensure/add alternate link of type application/rss+xml")
+    p.add_argument("--atom", action="append", default=[], nargs="+", help="ensure/add alternate link of type application/atom+xml")
 
     args = p.parse_args(args)
     links = []
 
-    def add_links(links, items, rel, _type=None):
+    def add_links (links, items, rel, _type=None):
         for href in items:
             d = {}
             d["rel"] = rel
@@ -200,7 +128,6 @@ def main(args):
             d["href"] = href
             links.append(d)
 
     for rel in ("stylesheet", "alternate", "next", "prev", "search"):
         add_links(links, getattr(args, rel), rel)
     for item in args.rss:
@@ -217,33 +144,27 @@ def main(args):
         doc = parse(fin, treebuilder="etree", namespaceHTMLElements=False)
         if fin != sys.stdin:
             fin.close()
-    html5tidy(
-        doc,
-        scripts=args.script,
-        links=links,
-        title=args.title,
-        indent=args.indent,
-    )
+    html5tidy(doc, scripts=args.script, links=links, title=args.title, indent=args.indent)
 
     # OUTPUT
     tmppath = None
     if args.output:
         fout = open(args.output, "w")
     elif args.mogrify:
-        tmppath = args.input + ".tmp"
+        tmppath = args.input+".tmp"
        fout = open(tmppath, "w")
     else:
         fout = sys.stdout
 
-    print(ET.tostring(doc, method=args.method, encoding="unicode"), file=fout)
+    print (ET.tostring(doc, method=args.method, encoding="unicode"), file=fout)
 
     if fout != sys.stdout:
         fout.close()
 
     if tmppath:
-        os.rename(args.input, args.input + "~")
+        os.rename(args.input, args.input+"~")
         os.rename(tmppath, args.input)
 
 if __name__ == "__main__":
     main(sys.argv)
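A small sketch of calling `html5tidy` as a function, outside of `main`; the input HTML and link target are made up:

```python
from xml.etree import ElementTree as ET

from html5lib import parse

from etherpump.commands.html5tidy import html5tidy

doc = parse("<p>hello</p>", treebuilder="etree", namespaceHTMLElements=False)
html5tidy(doc, title="hello", links=[{"href": "style.css", "rel": "stylesheet"}])
print(ET.tostring(doc, method="html", encoding="unicode"))
```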

etherpump/commands/index.py

@@ -1,17 +1,23 @@
-"""Generate pages from etherpumps using a template"""
-
-import json
-import os
-import re
-import sys
-import time
+from __future__ import print_function
 from argparse import ArgumentParser
+import sys, json, re, os, time
 from datetime import datetime
-from urllib.parse import urlparse, urlunparse
 
-import dateutil.parser
-from jinja2 import Environment, FileSystemLoader
-
-from etherpump.commands.common import *  # noqa
+try:
+    # python2
+    from urllib2 import urlopen, URLError, HTTPError
+    from urllib import urlencode
+    from urlparse import urlparse, urlunparse
+except ImportError:
+    # python3
+    from urllib.parse import urlparse, urlunparse, urlencode, quote
+    from urllib.request import urlopen, URLError, HTTPError
+
+from jinja2 import FileSystemLoader, Environment
+from etherpump.commands.common import *
+from time import sleep
+import dateutil.parser
 
 """
 index:
@@ -21,8 +27,7 @@ index:
 """
 
-def group(items, key=lambda x: x):
+def group (items, key=lambda x: x):
     """ returns a list of lists, of items grouped by a key function """
     ret = []
     keys = {}
@@ -36,33 +41,31 @@ def group(items, key=lambda x: x):
             ret.append(keys[k])
     return ret
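`group` clusters items by a key function while keeping first-seen key order, which the indexer uses to gather each pad's sibling files; a quick check with hypothetical file names, assuming the standard first-seen ordering:

```python
from etherpump.commands.index import group

files = ["notes.raw.txt", "notes.meta.json", "todo.raw.txt"]
# group by everything before the first dot, mirroring splitextlong/base below
grouped = group(files, key=lambda name: name.split(".")[0])
assert grouped == [["notes.raw.txt", "notes.meta.json"], ["todo.raw.txt"]]
```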
# def base (x):
# return re.sub(r"(\.raw\.html)|(\.diff\.html)|(\.meta\.json)|(\.raw\.txt)$", "", x)
def splitextlong(x): def splitextlong (x):
""" split "long" extensions, i.e. foo.bar.baz => ('foo', '.bar.baz') """ """ split "long" extensions, i.e. foo.bar.baz => ('foo', '.bar.baz') """
m = re.search(r"^(.*?)(\..*)$", x) m = re.search(r"^(.*?)(\..*)$", x)
if m: if m:
return m.groups() return m.groups()
else: else:
return x, "" return x, ''
def base (x):
def base(x):
return splitextlong(x)[0] return splitextlong(x)[0]
def excerpt (t, chars=25):
def excerpt(t, chars=25):
if len(t) > chars: if len(t) > chars:
t = t[:chars] + "..." t = t[:chars] + "..."
return t return t
def absurl (url, base=None):
def absurl(url, base=None):
if not url.startswith("http"): if not url.startswith("http"):
return base + url return base + url
return url return url
def url_base (url):
def url_base(url):
(scheme, netloc, path, params, query, fragment) = urlparse(url) (scheme, netloc, path, params, query, fragment) = urlparse(url)
path, _ = os.path.split(path.lstrip("/")) path, _ = os.path.split(path.lstrip("/"))
ret = urlunparse((scheme, netloc, path, None, None, None)) ret = urlunparse((scheme, netloc, path, None, None, None))
@ -70,136 +73,50 @@ def url_base(url):
ret += "/" ret += "/"
return ret return ret
def datetimeformat (t, format='%Y-%m-%d %H:%M:%S'):
def datetimeformat(t, format="%Y-%m-%d %H:%M:%S"):
if type(t) == str: if type(t) == str:
dt = dateutil.parser.parse(t) dt = dateutil.parser.parse(t)
return dt.strftime(format) return dt.strftime(format)
else: else:
return time.strftime(format, time.localtime(t)) return time.strftime(format, time.localtime(t))
def main(args):
    p = ArgumentParser("Convert dumped files to a document via a template.")
    p.add_argument("input", nargs="+", help="Files to list (.meta.json files)")
    p.add_argument(
        "--templatepath",
        default=None,
        help="path to find templates, default: built-in",
    )
    p.add_argument(
        "--template",
        default="index.html",
        help="template name, built-ins include index.html, rss.xml; default: index.html",
    )
    p.add_argument(
        "--padinfo",
        default=".etherpump/settings.json",
        help="settings, default: ./.etherdump/settings.json",
    )
    # p.add_argument("--zerorevs", default=False, action="store_true", help="include pads with zero revisions, default: False (i.e. pads with no revisions are skipped)")
    p.add_argument(
        "--order",
        default="padid",
        help="order, possible values: padid, pad (no group name), lastedited, (number of) authors, revisions, default: padid",
    )
    p.add_argument(
        "--reverse",
        default=False,
        action="store_true",
        help="reverse order, default: False (reverse chrono)",
    )
    p.add_argument(
        "--limit",
        type=int,
        default=0,
        help="limit to number of items, default: 0 (no limit)",
    )
    p.add_argument(
        "--skip",
        default=None,
        type=int,
        help="skip this many items, default: None",
    )
    p.add_argument(
        "--content",
        default=False,
        action="store_true",
        help="rss: include (full) content tag, default: False",
    )
    p.add_argument(
        "--link",
        default="diffhtml,html,text",
        help="link variable will be to this version, can be comma-delim list, use first avail, default: diffhtml,html,text",
    )
    p.add_argument(
        "--linkbase",
        default=None,
        help="base url to use for links, default: try to use the feedurl",
    )
    p.add_argument("--output", default=None, help="output, default: stdout")
    p.add_argument(
        "--files",
        default=False,
        action="store_true",
        help="include files (experimental)",
    )

    pg = p.add_argument_group("template variables")
    pg.add_argument(
        "--feedurl",
        default="feed.xml",
        help="rss: to use as feeds own (self) link, default: feed.xml",
    )
    pg.add_argument(
        "--siteurl",
        default=None,
        help="rss: to use as channel's site link, default: the etherpad url",
    )
    pg.add_argument(
        "--title",
        default="etherpump",
        help="title for document or rss feed channel title, default: etherdump",
    )
    pg.add_argument(
        "--description",
        default="",
        help="rss: channel description, default: empty",
    )
    pg.add_argument(
        "--language", default="en-US", help="rss: feed language, default: en-US"
    )
    pg.add_argument(
        "--updatePeriod",
        default="daily",
        help="rss: updatePeriod, possible values: hourly, daily, weekly, monthly, yearly; default: daily",
    )
    pg.add_argument(
        "--updateFrequency",
        default=1,
        type=int,
        help="rss: update frequency within the update period (where 2 would mean twice per period); default: 1",
    )
    pg.add_argument(
        "--generator",
        default="https://gitlab.com/activearchives/etherpump",
        help="generator, default: https://gitlab.com/activearchives/etherdump",
    )
    pg.add_argument(
        "--timestamp",
        default=None,
        help="timestamp, default: now (e.g. 2015-12-01 12:30:00)",
    )
    pg.add_argument("--next", default=None, help="next link, default: None)")
    pg.add_argument("--prev", default=None, help="prev link, default: None")

    args = p.parse_args(args)
    tmpath = args.templatepath
    # Default path for template is the built-in data/templates
    if tmpath == None:
@@ -219,25 +136,28 @@ def main(args):
    # Use "base" to strip (longest) extensions
    # inputs = group(inputs, base)

    def wrappath(p):
        path = "./{0}".format(p)
        ext = os.path.splitext(p)[1][1:]
        return {"url": path, "path": path, "code": 200, "type": ext}

    def metaforpaths(paths):
        ret = {}
        pid = base(paths[0])
        ret["pad"] = ret["padid"] = pid
        ret["versions"] = [wrappath(x) for x in paths]
        lastedited = None
        for p in paths:
            mtime = os.stat(p).st_mtime
            if lastedited == None or mtime > lastedited:
                lastedited = mtime
        ret["lastedited_iso"] = datetime.fromtimestamp(lastedited).strftime(
            "%Y-%m-%dT%H:%M:%S"
        )
        ret["lastedited_raw"] = mtime
        return ret

    def loadmeta(p):
@@ -256,32 +176,28 @@ def main(args):
    # else:
    #     return metaforpaths(paths)

    def fixdates(padmeta):
        d = dateutil.parser.parse(padmeta["lastedited_iso"])
        padmeta["lastedited"] = d
        padmeta["lastedited_822"] = d.strftime("%a, %d %b %Y %H:%M:%S +0000")
        return padmeta

    pads = list(map(loadmeta, inputs))
    pads = [x for x in pads if x != None]
    pads = list(map(fixdates, pads))
    args.pads = list(pads)

    def could_have_base(x, y):
        return x == y or (x.startswith(y) and x[len(y):].startswith("."))

    def get_best_pad(x):
        for pb in padbases:
            p = pads_by_base[pb]
            if could_have_base(x, pb):
                return p

    def has_version(padinfo, path):
        return [
            x
            for x in padinfo["versions"]
            if "path" in x and x["path"] == "./" + path
        ]

    if args.files:
        inputs = args.input
@@ -291,7 +207,7 @@ def main(args):
    pads_by_base = {}
    for p in args.pads:
        # print ("Trying padid", p['padid'], file=sys.stderr)
        padbase = os.path.splitext(p["padid"])[0]
        pads_by_base[padbase] = p
    padbases = list(pads_by_base.keys())
    # SORT THEM LONGEST FIRST TO ensure that LONGEST MATCHES MATCH
@@ -299,33 +215,25 @@ def main(args):
    # print ("PADBASES", file=sys.stderr)
    # for pb in padbases:
    #     print ("  ", pb, file=sys.stderr)
    print("pairing input files with pads", file=sys.stderr)
    for x in inputs:
        # pair input with a pad if possible
        xbasename = os.path.basename(x)
        p = get_best_pad(xbasename)
        if p:
            if not has_version(p, x):
                print(
                    "Grouping file {0} with pad {1}".format(x, p["padid"]),
                    file=sys.stderr,
                )
                p["versions"].append(wrappath(x))
            else:
                print(
                    "Skipping existing version {0} ({1})...".format(
                        x, p["padid"]
                    ),
                    file=sys.stderr,
                )
            removelist.append(x)
    # Removed Matches files
    for x in removelist:
        inputs.remove(x)
    print("Remaining files:", file=sys.stderr)
    for x in inputs:
        print(x, file=sys.stderr)
    print(file=sys.stderr)
    # Add "fake" pads for remaining files
    for x in inputs:
        args.pads.append(metaforpaths([x]))
@@ -334,14 +242,14 @@ def main(args):
    args.timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    padurlbase = re.sub(r"api/1.2.9/$", "p/", info["apiurl"])
    args.siteurl = args.siteurl or padurlbase
    args.utcnow = datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S +0000")

    # order items & apply limit
    if args.order == "lastedited":
        args.pads.sort(
            key=lambda x: x.get("lastedited_iso"), reverse=args.reverse
        )
    elif args.order == "pad":
        args.pads.sort(key=lambda x: x.get("pad"), reverse=args.reverse)
    elif args.order == "padid":
@@ -349,14 +257,12 @@ def main(args):
    elif args.order == "revisions":
        args.pads.sort(key=lambda x: x.get("revisions"), reverse=args.reverse)
    elif args.order == "authors":
        args.pads.sort(
            key=lambda x: len(x.get("authors")), reverse=args.reverse
        )
    else:
        raise Exception("That ordering is not implemented!")

    if args.limit:
        args.pads = args.pads[: args.limit]

    # add versions_by_type, add in full text
    # add link (based on args.link)
@@ -373,10 +279,10 @@ def main(args):
    if "text" in versions_by_type:
        try:
            with open(versions_by_type["text"]["path"]) as f:
                p["text"] = f.read()
        except FileNotFoundError:
            p["text"] = ""
    # ADD IN LINK TO PAD AS "link"
    for v in linkversions:
        if v in versions_by_type:
@@ -390,6 +296,6 @@ def main(args):
    if args.output:
        with open(args.output, "w") as f:
            print(template.render(vars(args)), file=f)
    else:
        print(template.render(vars(args)))
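The `splitextlong`/`base` pair above is what lets the index treat the different exports of one pad as a single item. A standalone sketch with hypothetical pad filenames:

    import re

    def splitextlong(x):
        # split "long" extensions: mypad.meta.json -> ('mypad', '.meta.json')
        m = re.search(r"^(.*?)(\..*)$", x)
        return m.groups() if m else (x, "")

    # hypothetical exports of a single pad, all sharing the base "mypad":
    for name in ["mypad.raw.txt", "mypad.diff.html", "mypad.meta.json"]:
        print(splitextlong(name))  # ('mypad', '.raw.txt'), ...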


@@ -1,20 +1,27 @@
"""Initialize an etherpump folder"""

import json
import os
import sys
from argparse import ArgumentParser
from urllib.parse import urlencode, urlparse, urlunparse
from urllib.request import HTTPError, URLError, urlopen


def get_api(url, cmd=None, data=None, verbose=False):
    try:
        useurl = url + cmd
        if data:
            useurl += "?" + urlencode(data)
        if verbose:
            print("trying", useurl, file=sys.stderr)
        resp = urlopen(useurl).read()
        resp = resp.decode("utf-8")
        resp = json.loads(resp)
@@ -22,17 +29,20 @@ def get_api(url, cmd=None, data=None, verbose=False):
        return resp
    except ValueError as e:
        if verbose:
            print("  ValueError", e, file=sys.stderr)
        return
    except HTTPError as e:
        if verbose:
            print("  HTTPError", e, file=sys.stderr)
        if e.code == 401:
            # Unauthorized is how the API responds to an incorrect API key
            return {"code": 401, "message": e}


def tryapiurl(url, verbose=False):
    """
    Try to use url as api, correcting if possible.
    Returns corrected / normalized URL, or None if not possible
@@ -41,32 +51,26 @@ def tryapiurl(url, verbose=False):
    scheme, netloc, path, params, query, fragment = urlparse(url)
    if scheme == "":
        url = "http://" + url
        scheme, netloc, path, params, query, fragment = urlparse(url)
    params, query, fragment = ("", "", "")
    path = path.strip("/")
    # 1. try directly...
    apiurl = (
        urlunparse((scheme, netloc, path, params, query, fragment)) + "/"
    )
    if get_api(apiurl, "listAllPads", verbose=verbose):
        return apiurl
    # 2. try with += api/1.2.9
    path = os.path.join(path, "api", "1.2.9") + "/"
    apiurl = urlunparse((scheme, netloc, path, params, query, fragment))
    if get_api(apiurl, "listAllPads", verbose=verbose):
        return apiurl
    except URLError as e:
        print("URLError", e, file=sys.stderr)
def main(args):
    p = ArgumentParser("initialize an etherpump folder")
    p.add_argument(
        "arg",
        nargs="*",
        default=[],
        help="optional positional args: path etherpadurl",
    )
    p.add_argument("--path", default=None, help="path to initialize")
    p.add_argument("--padurl", default=None, help="")
    p.add_argument("--apikey", default=None, help="")
@@ -74,6 +78,7 @@ def main(args):
    p.add_argument("--reinit", default=False, action="store_true", help="")
    args = p.parse_args(args)

    path = args.path
    if path == None and len(args.arg):
        path = args.arg[0]
@@ -92,7 +97,7 @@ def main(args):
        with open(padinfopath) as f:
            padinfo = json.load(f)
        if not args.reinit:
            print("Folder already initialized. Use --reinit to reset settings")
            sys.exit(0)
    except IOError:
        pass
@@ -103,29 +108,22 @@ def main(args):
    apiurl = args.padurl
    while True:
        if apiurl:
            apiurl = tryapiurl(apiurl, verbose=args.verbose)
            if apiurl:
                # print ("Got APIURL: {0}".format(apiurl))
                break
        apiurl = input(
            "Please type the URL of the etherpad (e.g. https://pad.vvvvvvaria.org): "
        ).strip()
    padinfo["apiurl"] = apiurl
    apikey = args.apikey
    while True:
        if apikey:
            resp = get_api(
                apiurl, "listAllPads", {"apikey": apikey}, verbose=args.verbose
            )
            if resp and resp["code"] == 0:
                # print ("GOOD")
                break
            else:
                print("bad")
        print(
            "The APIKEY is the contents of the file APIKEY.txt in the etherpad folder",
            file=sys.stderr,
        )
        apikey = input("Please paste the APIKEY: ").strip()
    padinfo["apikey"] = apikey


@@ -1,13 +1,10 @@
import json
import os
import re
from argparse import ArgumentParser
from urllib.error import HTTPError, URLError
from urllib.parse import urlencode
from urllib.request import urlopen


def group(items, key=lambda x: x):
    ret = []
    keys = {}
    for item in items:
@@ -20,7 +17,6 @@ def group(items, key=lambda x: x):
            ret.append(keys[k])
    return ret


def main(args):
    p = ArgumentParser("")
    p.add_argument("input", nargs="+", help="filenames")
@@ -31,11 +27,10 @@ def main(args):
    inputs = [x for x in inputs if not os.path.isdir(x)]

    def base(x):
        return re.sub(r"(\.html)|(\.diff\.html)|(\.meta\.json)|(\.txt)$", "", x)

    # from pprint import pprint
    # pprint()
    gg = group(inputs, base)
    for items in gg:
        itembase = base(items[0])
@@ -45,5 +40,5 @@ def main(args):
            pass
        for i in items:
            newloc = os.path.join(itembase, i)
            print("'{0}' => '{1}'".format(i, newloc))
            os.rename(i, newloc)
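The grouping this command performs can be tried in isolation with the same `base` regex; the filenames here are hypothetical:

    import re

    def base(x):
        return re.sub(r"(\.html)|(\.diff\.html)|(\.meta\.json)|(\.txt)$", "", x)

    # hypothetical exports of one pad: they all reduce to the folder name
    # "notes", so the loop above would move them into a directory "notes"
    files = ["notes.txt", "notes.meta.json", "notes.diff.html"]
    print({base(f) for f in files})  # -> {'notes'}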


@@ -1,42 +1,40 @@
"""Call listAllPads and print the results"""

import json
import sys
from argparse import ArgumentParser
from urllib.parse import urlencode, urlparse, urlunparse
from urllib.request import HTTPError, URLError, urlopen

from etherpump.commands.common import getjson


def main(args):
    p = ArgumentParser("call listAllPads and print the results")
    p.add_argument(
        "--padinfo",
        default=".etherpump/settings.json",
        help="settings, default: .etherdump/settings.json",
    )
    p.add_argument("--showurl", default=False, action="store_true")
    p.add_argument(
        "--format",
        default="lines",
        help="output format: lines, json; default lines",
    )
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    # apiurl = {0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
    data = {}
    data["apikey"] = info["apikey"]
    requesturl = apiurl + "listAllPads?" + urlencode(data)
    if args.showurl:
        print(requesturl)
    else:
        results = getjson(requesturl)["data"]["padIDs"]
        if args.format == "json":
            print(json.dumps(results))
        else:
            for r in results:
                print(r)
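The request this command builds is a plain GET against the etherpad API. A sketch of the same URL construction, with a placeholder endpoint and key:

    from urllib.parse import urlencode

    # values as read from .etherpump/settings.json; both are placeholders here
    apiurl = "https://pad.example.org/api/1.2.9/"
    data = {"apikey": "0123abcd"}

    requesturl = apiurl + "listAllPads?" + urlencode(data)
    print(requesturl)
    # -> https://pad.example.org/api/1.2.9/listAllPads?apikey=0123abcd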


@@ -1,40 +1,31 @@
"""Call listAuthorsOfPad for the padid"""

import json
from argparse import ArgumentParser
from urllib.parse import urlencode
from urllib.request import urlopen


def main(args):
    p = ArgumentParser("call listAuthorsOfPad for the padid")
    p.add_argument("padid", help="the padid")
    p.add_argument(
        "--padinfo",
        default=".etherpump/settings.json",
        help="settings, default: .etherdump/settings.json",
    )
    p.add_argument("--showurl", default=False, action="store_true")
    p.add_argument(
        "--format",
        default="lines",
        help="output format, can be: lines, json; default: lines",
    )
    args = p.parse_args(args)

    with open(args.padinfo) as f:
        info = json.load(f)
    apiurl = info.get("apiurl")
    data = {}
    data["apikey"] = info["apikey"]
    data["padID"] = args.padid
    requesturl = apiurl + "listAuthorsOfPad?" + urlencode(data)
    if args.showurl:
        print(requesturl)
    else:
        results = json.load(urlopen(requesturl))["data"]["authorIDs"]
        if args.format == "json":
            print(json.dumps(results))
        else:
            for r in results:
                print(r)


@@ -1,29 +1,34 @@
"""Generate a single document from etherpumps using a template"""

import json
import os
import re
import sys
import time
from argparse import ArgumentParser
from datetime import datetime
from urllib.parse import urlparse, urlunparse

import dateutil.parser
import pypandoc
from jinja2 import Environment, FileSystemLoader

from etherpump.commands.common import *  # noqa

"""
publication:
    Generate a single document from etherpumps using a template.

    Built-in templates: publication.html
"""


def group(items, key=lambda x: x):
    """ returns a list of lists, of items grouped by a key function """
    ret = []
    keys = {}
@@ -37,37 +42,31 @@ def group(items, key=lambda x: x):
            ret.append(keys[k])
    return ret


# def base (x):
#     return re.sub(r"(\.raw\.html)|(\.diff\.html)|(\.meta\.json)|(\.raw\.txt)$", "", x)


def splitextlong(x):
    """ split "long" extensions, i.e. foo.bar.baz => ('foo', '.bar.baz') """
    m = re.search(r"^(.*?)(\..*)$", x)
    if m:
        return m.groups()
    else:
        return x, ""


def base(x):
    return splitextlong(x)[0]


def excerpt(t, chars=25):
    if len(t) > chars:
        t = t[:chars] + "..."
    return t


def absurl(url, base=None):
    if not url.startswith("http"):
        return base + url
    return url


def url_base(url):
    (scheme, netloc, path, params, query, fragment) = urlparse(url)
    path, _ = os.path.split(path.lstrip("/"))
    ret = urlunparse((scheme, netloc, path, None, None, None))
@@ -75,136 +74,50 @@ def url_base(url):
        ret += "/"
    return ret


def datetimeformat(t, format="%Y-%m-%d %H:%M:%S"):
    if type(t) == str:
        dt = dateutil.parser.parse(t)
        return dt.strftime(format)
    else:
        return time.strftime(format, time.localtime(t))


def main(args):
    p = ArgumentParser("Convert dumped files to a document via a template.")
    p.add_argument("input", nargs="+", help="Files to list (.meta.json files)")
    p.add_argument(
        "--templatepath",
        default=None,
        help="path to find templates, default: built-in",
    )
    p.add_argument(
        "--template",
        default="publication.html",
        help="template name, built-ins include publication.html; default: publication.html",
    )
    p.add_argument(
        "--padinfo",
        default=".etherpump/settings.json",
        help="settings, default: ./.etherdump/settings.json",
    )
    # p.add_argument("--zerorevs", default=False, action="store_true", help="include pads with zero revisions, default: False (i.e. pads with no revisions are skipped)")
    p.add_argument(
        "--order",
        default="padid",
        help="order, possible values: padid, pad (no group name), lastedited, (number of) authors, revisions, default: padid",
    )
    p.add_argument(
        "--reverse",
        default=False,
        action="store_true",
        help="reverse order, default: False (reverse chrono)",
    )
    p.add_argument(
        "--limit",
        type=int,
        default=0,
        help="limit to number of items, default: 0 (no limit)",
    )
    p.add_argument(
        "--skip",
        default=None,
        type=int,
        help="skip this many items, default: None",
    )
    p.add_argument(
        "--content",
        default=False,
        action="store_true",
        help="rss: include (full) content tag, default: False",
    )
    p.add_argument(
        "--link",
        default="diffhtml,html,text",
        help="link variable will be to this version, can be comma-delim list, use first avail, default: diffhtml,html,text",
    )
    p.add_argument(
        "--linkbase",
        default=None,
        help="base url to use for links, default: try to use the feedurl",
    )
    p.add_argument("--output", default=None, help="output, default: stdout")
    p.add_argument(
        "--files",
        default=False,
        action="store_true",
        help="include files (experimental)",
    )

    pg = p.add_argument_group("template variables")
    pg.add_argument(
        "--feedurl",
        default="feed.xml",
        help="rss: to use as feeds own (self) link, default: feed.xml",
    )
    pg.add_argument(
        "--siteurl",
        default=None,
        help="rss: to use as channel's site link, default: the etherpad url",
    )
    pg.add_argument(
        "--title",
        default="etherpump",
        help="title for document or rss feed channel title, default: etherdump",
    )
    pg.add_argument(
        "--description",
        default="",
        help="rss: channel description, default: empty",
    )
    pg.add_argument(
        "--language", default="en-US", help="rss: feed language, default: en-US"
    )
    pg.add_argument(
        "--updatePeriod",
        default="daily",
        help="rss: updatePeriod, possible values: hourly, daily, weekly, monthly, yearly; default: daily",
    )
    pg.add_argument(
        "--updateFrequency",
        default=1,
        type=int,
        help="rss: update frequency within the update period (where 2 would mean twice per period); default: 1",
    )
    pg.add_argument(
        "--generator",
        default="https://git.vvvvvvaria.org/varia/etherpump",
        help="generator, default: https://git.vvvvvvaria.org/varia/etherdump",
    )
    pg.add_argument(
        "--timestamp",
        default=None,
        help="timestamp, default: now (e.g. 2015-12-01 12:30:00)",
    )
    pg.add_argument("--next", default=None, help="next link, default: None)")
    pg.add_argument("--prev", default=None, help="prev link, default: None")

    args = p.parse_args(args)
    tmpath = args.templatepath
    # Default path for template is the built-in data/templates
    if tmpath == None:
@@ -221,29 +134,31 @@ def main(args):
    inputs = args.input
    inputs.sort()
    # Use "base" to strip (longest) extensions
    # inputs = group(inputs, base)

    def wrappath(p):
        path = "./{0}".format(p)
        ext = os.path.splitext(p)[1][1:]
        return {"url": path, "path": path, "code": 200, "type": ext}

    def metaforpaths(paths):
        ret = {}
        pid = base(paths[0])
        ret["pad"] = ret["padid"] = pid
        ret["versions"] = [wrappath(x) for x in paths]
        lastedited = None
        for p in paths:
            mtime = os.stat(p).st_mtime
            if lastedited == None or mtime > lastedited:
                lastedited = mtime
        ret["lastedited_iso"] = datetime.fromtimestamp(lastedited).strftime(
            "%Y-%m-%dT%H:%M:%S"
        )
        ret["lastedited_raw"] = mtime
        return ret

    def loadmeta(p):
@@ -252,7 +167,7 @@ def main(args):
        if p.endswith(".meta.json"):
            with open(p) as f:
                return json.load(f)
        # if there is a .meta.json, load it & MERGE with other files
        # if ret:
        #     # TODO: merge with other files
        #     for p in paths:
@@ -262,32 +177,28 @@ def main(args):
        # else:
        #     return metaforpaths(paths)

    def fixdates(padmeta):
        d = dateutil.parser.parse(padmeta["lastedited_iso"])
        padmeta["lastedited"] = d
        padmeta["lastedited_822"] = d.strftime("%a, %d %b %Y %H:%M:%S +0000")
        return padmeta

    pads = list(map(loadmeta, inputs))
    pads = [x for x in pads if x != None]
    pads = list(map(fixdates, pads))
    args.pads = list(pads)

    def could_have_base(x, y):
        return x == y or (x.startswith(y) and x[len(y):].startswith("."))

    def get_best_pad(x):
        for pb in padbases:
            p = pads_by_base[pb]
            if could_have_base(x, pb):
                return p

    def has_version(padinfo, path):
        return [
            x
            for x in padinfo["versions"]
            if "path" in x and x["path"] == "./" + path
        ]

    if args.files:
        inputs = args.input
@@ -297,7 +208,7 @@ def main(args):
    pads_by_base = {}
    for p in args.pads:
        # print ("Trying padid", p['padid'], file=sys.stderr)
        padbase = os.path.splitext(p["padid"])[0]
        pads_by_base[padbase] = p
    padbases = list(pads_by_base.keys())
    # SORT THEM LONGEST FIRST TO ensure that LONGEST MATCHES MATCH
@@ -305,33 +216,25 @@ def main(args):
    # print ("PADBASES", file=sys.stderr)
    # for pb in padbases:
    #     print ("  ", pb, file=sys.stderr)
    print("pairing input files with pads", file=sys.stderr)
    for x in inputs:
        # pair input with a pad if possible
        xbasename = os.path.basename(x)
        p = get_best_pad(xbasename)
        if p:
            if not has_version(p, x):
                print(
                    "Grouping file {0} with pad {1}".format(x, p["padid"]),
                    file=sys.stderr,
                )
                p["versions"].append(wrappath(x))
            else:
                print(
                    "Skipping existing version {0} ({1})...".format(
                        x, p["padid"]
                    ),
                    file=sys.stderr,
                )
            removelist.append(x)
    # Removed Matches files
    for x in removelist:
        inputs.remove(x)
    print("Remaining files:", file=sys.stderr)
    for x in inputs:
        print(x, file=sys.stderr)
    print(file=sys.stderr)
    # Add "fake" pads for remaining files
    for x in inputs:
        args.pads.append(metaforpaths([x]))
@@ -340,14 +243,14 @@ def main(args):
    args.timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    padurlbase = re.sub(r"api/1.2.9/$", "p/", info["apiurl"])
    args.siteurl = args.siteurl or padurlbase
    args.utcnow = datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S +0000")

    # order items & apply limit
    if args.order == "lastedited":
        args.pads.sort(
            key=lambda x: x.get("lastedited_iso"), reverse=args.reverse
        )
    elif args.order == "pad":
        args.pads.sort(key=lambda x: x.get("pad"), reverse=args.reverse)
    elif args.order == "padid":
@@ -355,20 +258,17 @@ def main(args):
    elif args.order == "revisions":
        args.pads.sort(key=lambda x: x.get("revisions"), reverse=args.reverse)
    elif args.order == "authors":
        args.pads.sort(
            key=lambda x: len(x.get("authors")), reverse=args.reverse
        )
    elif args.order == "custom":
        # TODO: make this list non-static, but a variable that can be given from the CLI
        customorder = [
            "nooo.relearn.preamble",
            "nooo.relearn.activating.the.archive",
            "nooo.relearn.call.for.proposals",
            "nooo.relearn.call.for.proposals-proposal-footnote",
            "nooo.relearn.colophon",
        ]
        order = []
        for x in customorder:
            for pad in args.pads:
@@ -379,7 +279,7 @@ def main(args):
        raise Exception("That ordering is not implemented!")

    if args.limit:
        args.pads = args.pads[: args.limit]

    # add versions_by_type, add in full text
    # add link (based on args.link)
@@ -396,15 +296,15 @@ def main(args):
    if "text" in versions_by_type:
        # try:
        with open(versions_by_type["text"]["path"]) as f:
            content = f.read()
        # print('content:', content)
        # [Relearn] Add pandoc command here?
        html = pypandoc.convert_text(content, "html", format="md")
        # print('html:', html)
        p["text"] = html
        # except FileNotFoundError:
        #     p['text'] = 'ERROR'
    # ADD IN LINK TO PAD AS "link"
    for v in linkversions:
@@ -419,6 +319,6 @@ def main(args):
    if args.output:
        with open(args.output, "w") as f:
            print(template.render(vars(args)), file=f)
    else:
        print(template.render(vars(args)))
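The pandoc step above treats pad text as markdown and renders it to HTML. A minimal sketch of that one call, assuming the pypandoc package and a pandoc binary are installed; the pad text is a placeholder:

    import pypandoc  # needs the pandoc binary available on $PATH

    content = "# A pad title\n\nSome *pumped* text."  # placeholder pad text
    html = pypandoc.convert_text(content, "html", format="md")
    print(html)  # -> <h1 ...>A pad title</h1> ... <em>pumped</em> ...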


@@ -1,578 +1,271 @@
"""Check for pads that have changed since last sync (according to .meta.json)"""

import json
import os
import re
import sys
import time
from argparse import ArgumentParser
from datetime import datetime
from fnmatch import fnmatch
from urllib.parse import quote, urlencode
from urllib.request import HTTPError
from xml.etree import ElementTree as ET

import asks
import html5lib
import trio

from etherpump.commands.common import *  # noqa
from etherpump.commands.html5tidy import html5tidy

"""
pull(meta):
    Update meta data files for those that have changed.
    Check for changed pads by looking at revisions & comparing to existing

todo...
use/prefer public interfaces ? (export functions)
"""

# Note(decentral1se): simple globals counting
skipped, saved = 0, 0


async def try_deleting(files):
    for f in files:
        try:
            path = trio.Path(f)
            if os.path.exists(path):
                await path.rmdir()
        except Exception as exception:
            print("PANIC: {}".format(exception))


def build_argument_parser(args):
    parser = ArgumentParser(
        "Check for pads that have changed since last sync (according to .meta.json)"
    )
    parser.add_argument("padid", nargs="*", default=[])
    parser.add_argument(
        "--glob", default=False, help="download pads matching a glob pattern"
    )
    parser.add_argument(
        "--padinfo",
        default=".etherpump/settings.json",
        help="settings, default: .etherpump/settings.json",
    )
    parser.add_argument(
        "--zerorevs",
        default=False,
        action="store_true",
        help="include pads with zero revisions, default: False (i.e. pads with no revisions are skipped)",
    )
    parser.add_argument(
        "--pub",
        default="p",
        help="folder to store files for public pads, default: p",
    )
    parser.add_argument(
        "--group",
        default="g",
        help="folder to store files for group pads, default: g",
    )
    parser.add_argument(
        "--skip",
        default=None,
        type=int,
        help="skip this many items, default: None",
    )
    parser.add_argument(
        "--connection",
        default=50,
        type=int,
        help="number of connections to run concurrently",
    )
    parser.add_argument(
        "--meta",
        default=False,
        action="store_true",
        help="download meta to PADID.meta.json, default: False",
    )
    parser.add_argument(
        "--text",
        default=False,
        action="store_true",
        help="download text to PADID.txt, default: False",
    )
    parser.add_argument(
        "--html",
        default=False,
        action="store_true",
        help="download html to PADID.html, default: False",
    )
    parser.add_argument(
        "--dhtml",
        default=False,
        action="store_true",
        help="download dhtml to PADID.diff.html, default: False",
    )
    parser.add_argument(
        "--all",
        default=False,
        action="store_true",
        help="download all files (meta, text, html, dhtml), default: False",
    )
    parser.add_argument(
        "--folder",
        default=False,
        action="store_true",
        help="dump files in a folder named PADID (meta, text, html, dhtml), default: False",
    )
    parser.add_argument(
        "--output",
        default=False,
        action="store_true",
        help="output changed padids on stdout",
    )
    parser.add_argument(
        "--force",
        default=False,
        action="store_true",
        help="reload, even if revisions count matches previous",
    )
    parser.add_argument(
        "--no-raw-ext",
        default=False,
        action="store_true",
        help="save plain text as padname with no (additional) extension",
    )
    parser.add_argument(
        "--fix-names",
        default=False,
        action="store_true",
        help="normalize padid's (no spaces, special control chars) for use in file names",
    )
    parser.add_argument(
        "--filter-ext", default=None, help="filter pads by extension"
    )
    parser.add_argument(
        "--css",
        default="/styles.css",
        help="add css url to output pages, default: /styles.css",
    )
    parser.add_argument(
        "--script",
        default="/versions.js",
        help="add script url to output pages, default: /versions.js",
    )
    parser.add_argument(
        "--nopublish",
        default="__NOPUBLISH__",
        help="no publish magic word, default: __NOPUBLISH__",
    )
    parser.add_argument(
        "--publish",
        default="__PUBLISH__",
        help="the publish magic word, default: __PUBLISH__",
    )
    parser.add_argument(
        "--publish-opt-in",
        default=False,
        action="store_true",
        help="ensure `--publish` is honoured instead of `--nopublish`",
    )
    parser.add_argument(
        "--magicwords",
        default=False,
        action="store_true",
        help="download html to PADID.magicwords.html",
    )
    return parser


async def get_padids(args, info, data, session):
    if args.padid:
        padids = args.padid
    elif args.glob:
        url = info["localapiurl"] + "listAllPads?" + urlencode(data)
        padids = await agetjson(session, url)
        padids = padids["data"]["padIDs"]
        padids = [x for x in padids if fnmatch(x, args.glob)]
    else:
        url = info["localapiurl"] + "listAllPads?" + urlencode(data)
        padids = await agetjson(session, url)
        padids = padids["data"]["padIDs"]

    padids.sort()
    return padids
async def handle_pad(args, padid, data, info, session):
    global skipped, saved

    raw_ext = ".raw.txt"
    if args.no_raw_ext:
        raw_ext = ""

    data["padID"] = padid
    p = padpath(padid, args.pub, args.group, args.fix_names)
    if args.folder:
        p = os.path.join(p, padid)

    metapath = p + ".meta.json"
    revisions = None
    tries = 1
    skip = False
    padurlbase = re.sub(r"api/1.2.9/$", "p/", info["apiurl"])
    meta = {}

    while True:
        try:
            if os.path.exists(metapath):
                async with await trio.open_file(metapath) as f:
                    contents = await f.read()
                    meta.update(json.loads(contents))
                url = (
                    info["localapiurl"] + "getRevisionsCount?" + urlencode(data)
                )
                response = await agetjson(session, url)
                revisions = response["data"]["revisions"]
                if meta["revisions"] == revisions and not args.force:
                    skip = True
                    reason = "No new revisions, we already have the latest local copy"
                    break

            meta["padid"] = padid
            versions = meta["versions"] = []
            versions.append(
                {"url": padurlbase + quote(padid), "type": "pad", "code": 200,}
            )

            if revisions is None:
                url = (
                    info["localapiurl"] + "getRevisionsCount?" + urlencode(data)
                )
                response = await agetjson(session, url)
                meta["revisions"] = response["data"]["revisions"]
            else:
                meta["revisions"] = revisions

            if (meta["revisions"] == 0) and (not args.zerorevs):
                skip = True
                reason = "0 revisions, this pad was never edited"
                break

            # todo: load more metadata!
            meta["group"], meta["pad"] = splitpadname(padid)
            meta["pathbase"] = p

            url = info["localapiurl"] + "getLastEdited?" + urlencode(data)
            response = await agetjson(session, url)
            meta["lastedited_raw"] = int(response["data"]["lastEdited"])

            meta["lastedited_iso"] = datetime.fromtimestamp(
                int(meta["lastedited_raw"]) / 1000
            ).isoformat()

            url = info["localapiurl"] + "listAuthorsOfPad?" + urlencode(data)
            response = await agetjson(session, url)
            meta["author_ids"] = response["data"]["authorIDs"]

            break
        except HTTPError as e:
            tries += 1
            if tries > 3:
                print(
                    "Too many failures ({0}), skipping".format(padid),
                    file=sys.stderr,
                )
                skip = True
                reason = "PANIC, couldn't download the pad contents"
                break
            else:
                await trio.sleep(1)
        except TypeError as e:
            print(
                "Type Error loading pad {0} (phantom pad?), skipping".format(
                    padid
                ),
                file=sys.stderr,
            )
            skip = True
            reason = "PANIC, couldn't download the pad contents"
            break

    if skip:
        print("[ ] {} (skipped, reason: {})".format(padid, reason))
        skipped += 1
        return

    if args.output:
        print(padid)

    if args.all or (args.meta or args.text or args.html or args.dhtml):
        try:
            path = trio.Path(os.path.split(metapath)[0])
            if not os.path.exists(path):
                await path.mkdir()
        except OSError:
            # Note(decentral1se): the path already exists
            pass

    if args.all or args.text:
        url = info["localapiurl"] + "getText?" + urlencode(data)
        text = await agetjson(session, url)
        ver = {"type": "text"}
        versions.append(ver)
        ver["code"] = text["_code"]

        if text["_code"] == 200:
            text = text["data"]["text"]

            ##########################################
            ## ENFORCE __NOPUBLISH__ MAGIC WORD
            ##########################################
            if args.nopublish in text:
                await try_deleting(
                    (
                        p + raw_ext,
                        p + ".raw.html",
                        p + ".diff.html",
                        p + ".meta.json",
                    )
                )
                print(
                    "[ ] {} (deleted, reason: explicit __NOPUBLISH__)".format(
                        padid
                    )
                )
                skipped += 1
                return False

            ##########################################
            ## ENFORCE __PUBLISH__ MAGIC WORD
            ##########################################
            if args.publish_opt_in and args.publish not in text:
                await try_deleting(
                    (
                        p + raw_ext,
                        p + ".raw.html",
                        p + ".diff.html",
                        p + ".meta.json",
                    )
                )
                print("[ ] {} (deleted, reason: publish opt-out)".format(padid))
                skipped += 1
                return False

            ver["path"] = p + raw_ext
            ver["url"] = quote(ver["path"])
            async with await trio.open_file(ver["path"], "w") as f:
                try:
                    # Note(decentral1se): unicode handling...
                    safe_text = text.encode("utf-8", "replace").decode()
                    await f.write(safe_text)
print("PANIC: {}".format(exception)) doc = html5lib.parse(html, treebuilder="etree", namespaceHTMLElements=False)
html5tidy(doc, indent=True, title=padid, scripts=args.script, links=links)
with open(ver["path"], "w") as f:
# f.write(html.encode("utf-8"))
print(ET.tostring(doc, method="html", encoding="unicode"), file=f)
except TypeError:
# Malformed / incomplete response, record the message (such as "internal error") in the metadata and write NO file!
ver["message"] = html["message"]
# with open(ver["path"], "w") as f:
# print ("""<pre>{0}</pre>""".format(json.dumps(html, indent=2)), file=f)
# once the content is settled, compute a hash # Process text, html, dhtml, all options
# and link it in the metadata! if args.all or args.html:
html = getjson(info['localapiurl']+'getHTML?'+urlencode(data))
########################################## ver = {"type": "html"}
# INCLUDE __XXX__ MAGIC WORDS versions.append(ver)
########################################## ver["code"] = html["_code"]
if args.all or args.magicwords: if html["_code"] == 200:
pattern = r"__[a-zA-Z0-9]+?__" html = html['data']['html']
all_matches = re.findall(pattern, text) ver["path"] = p+".raw.html"
magic_words = list(set(all_matches))
if magic_words:
meta["magicwords"] = magic_words
links = []
if args.css:
links.append({"href": args.css, "rel": "stylesheet"})
# todo, make this process reflect which files actually were made
versionbaseurl = quote(padid)
links.append(
{
"href": versions[0]["url"],
"rel": "alternate",
"type": "text/html",
"title": "Etherpad",
}
)
if args.all or args.text:
links.append(
{
"href": versionbaseurl + raw_ext,
"rel": "alternate",
"type": "text/plain",
"title": "Plain text",
}
)
if args.all or args.html:
links.append(
{
"href": versionbaseurl + ".raw.html",
"rel": "alternate",
"type": "text/html",
"title": "HTML",
}
)
if args.all or args.dhtml:
links.append(
{
"href": versionbaseurl + ".diff.html",
"rel": "alternate",
"type": "text/html",
"title": "HTML with author colors",
}
)
if args.all or args.meta:
links.append(
{
"href": versionbaseurl + ".meta.json",
"rel": "alternate",
"type": "application/json",
"title": "Meta data",
}
)
if args.all or args.dhtml:
data["startRev"] = "0"
url = info["localapiurl"] + "createDiffHTML?" + urlencode(data)
dhtml = await agetjson(session, url)
ver = {"type": "diffhtml"}
versions.append(ver)
ver["code"] = dhtml["_code"]
if dhtml["_code"] == 200:
try:
dhtml_body = dhtml["data"]["html"]
ver["path"] = p + ".diff.html"
ver["url"] = quote(ver["path"]) ver["url"] = quote(ver["path"])
doc = html5lib.parse( doc = html5lib.parse(html, treebuilder="etree", namespaceHTMLElements=False)
dhtml_body, treebuilder="etree", namespaceHTMLElements=False html5tidy(doc, indent=True, title=padid, scripts=args.script, links=links)
) with open(ver["path"], "w") as f:
html5tidy( # f.write(html.encode("utf-8"))
doc, print (ET.tostring(doc, method="html", encoding="unicode"), file=f)
indent=True,
title=padid,
scripts=args.script,
links=links,
)
async with await trio.open_file(ver["path"], "w") as f:
output = ET.tostring(doc, method="html", encoding="unicode")
await f.write(output)
except TypeError:
ver["message"] = dhtml["message"]
# Process text, html, dhtml, magicwords and all options # output meta
downloaded_html = False if args.all or args.meta:
if args.all or args.html: ver = {"type": "meta"}
url = info["localapiurl"] + "getHTML?" + urlencode(data) versions.append(ver)
html = await agetjson(session, url) ver["path"] = metapath
ver = {"type": "html"} ver["url"] = quote(metapath)
versions.append(ver) with open(metapath, "w") as f:
ver["code"] = html["_code"] json.dump(meta, f, indent=2)
downloaded_html = True
if html["_code"] == 200: print("\n{0} pad(s) loaded".format(count), file=sys.stderr)
try:
html_body = html["data"]["html"]
ver["path"] = p + ".raw.html"
ver["url"] = quote(ver["path"])
doc = html5lib.parse(
html_body, treebuilder="etree", namespaceHTMLElements=False
)
html5tidy(
doc,
indent=True,
title=padid,
scripts=args.script,
links=links,
)
async with await trio.open_file(ver["path"], "w") as f:
output = ET.tostring(doc, method="html", encoding="unicode")
await f.write(output)
except TypeError:
ver["message"] = html["message"]
if args.all or args.magicwords:
if not downloaded_html:
html = await agetjson(session, url)
ver = {"type": "magicwords"}
versions.append(ver)
ver["code"] = html["_code"]
if html["_code"] == 200:
try:
html_body = html["data"]["html"]
ver["path"] = p + ".magicwords.html"
ver["url"] = quote(ver["path"])
for magic_word in magic_words:
replace_word = (
"<span class='highlight'>" + magic_word + "</span>"
)
if magic_word in html_body:
html_body = html_body.replace(magic_word, replace_word)
doc = html5lib.parse(
html_body, treebuilder="etree", namespaceHTMLElements=False
)
html5tidy(
doc,
indent=True,
title=padid,
scripts=args.script,
links=links,
)
async with await trio.open_file(ver["path"], "w") as f:
output = ET.tostring(doc, method="html", encoding="unicode")
await f.write(output)
except TypeError:
ver["message"] = html["message"]
# output meta
if args.all or args.meta:
ver = {"type": "meta"}
versions.append(ver)
ver["path"] = metapath
ver["url"] = quote(metapath)
async with await trio.open_file(metapath, "w") as f:
await f.write(json.dumps(meta))
try:
mwords_msg = ", magic words: {}".format(", ".join(magic_words))
except UnboundLocalError:
mwords_msg = "" # Note(decentral1se): for when magic_words are not counted
print("[x] {} (saved{})".format(padid, mwords_msg))
saved += 1
return
async def handle_pads(args):
global skipped, saved
session = asks.Session(connections=args.connection)
info = loadpadinfo(args.padinfo)
data = {"apikey": info["apikey"]}
padids = await get_padids(args, info, data, session)
if args.skip:
padids = padids[args.skip : len(padids)]
print("=" * 79)
print("Etherpump is warming up the engines ...")
print("=" * 79)
start = time.time()
async with trio.open_nursery() as nursery:
for padid in padids:
nursery.start_soon(
handle_pad, args, padid, data.copy(), info, session
)
end = time.time()
timeit = round(end - start, 2)
print("=" * 79)
print(
"Processed {} :: Skipped {} :: Saved {} :: Time {}s".format(
len(padids), skipped, saved, timeit
)
)
print("=" * 79)
def main(args):
p = build_argument_parser(args)
args = p.parse_args(args)
trio.run(handle_pads, args)
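
handle_pads above drives one handle_pad task per pad inside a single trio nursery, so the downloads run concurrently and the summary line only prints once every task has finished. A minimal standalone sketch of that pattern, with made-up pad names and a sleep standing in for the real API round-trips:

    import trio

    async def fetch(padid):
        await trio.sleep(0.1)  # stands in for the getText/getHTML calls
        print("pulled", padid)

    async def pull_all(padids):
        # the nursery only exits once every start_soon task has completed
        async with trio.open_nursery() as nursery:
            for padid in padids:
                nursery.start_soon(fetch, padid)

    trio.run(pull_all, ["notes", "varia-agenda"])  # hypothetical padids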

@@ -1,20 +1,13 @@
"""Call getRevisionsCount for the given padid""" from __future__ import print_function
import json
from argparse import ArgumentParser from argparse import ArgumentParser
from urllib.error import HTTPError, URLError import json
from urllib.parse import urlencode from urllib import urlencode
from urllib.request import urlopen from urllib2 import urlopen, HTTPError, URLError
def main(args): def main(args):
p = ArgumentParser("call getRevisionsCount for the given padid") p = ArgumentParser("call getRevisionsCount for the given padid")
p.add_argument("padid", help="the padid") p.add_argument("padid", help="the padid")
p.add_argument( p.add_argument("--padinfo", default=".etherpump/settings.json", help="settings, default: .etherdump/settings.json")
"--padinfo",
default=".etherpump/settings.json",
help="settings, default: .etherdump/settings.json",
)
p.add_argument("--showurl", default=False, action="store_true") p.add_argument("--showurl", default=False, action="store_true")
args = p.parse_args(args) args = p.parse_args(args)
@ -22,11 +15,11 @@ def main(args):
info = json.load(f) info = json.load(f)
apiurl = info.get("apiurl") apiurl = info.get("apiurl")
data = {} data = {}
data["apikey"] = info["apikey"] data['apikey'] = info['apikey']
data["padID"] = args.padid data['padID'] = args.padid.encode("utf-8")
requesturl = apiurl + "getRevisionsCount?" + urlencode(data) requesturl = apiurl+'getRevisionsCount?'+urlencode(data)
if args.showurl: if args.showurl:
print(requesturl) print (requesturl)
else: else:
results = json.load(urlopen(requesturl))["data"]["revisions"] results = json.load(urlopen(requesturl))['data']['revisions']
print(results) print (results)
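
The command indexes straight into ["data"]["revisions"] because the Etherpad HTTP API wraps every result in the same envelope; a getRevisionsCount response looks roughly like this (revision count made up):

    {"code": 0, "message": "ok", "data": {"revisions": 56}}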

@@ -1,62 +1,39 @@
"""Calls the setHTML API function for the given padid""" from __future__ import print_function
import json
import sys
from argparse import ArgumentParser from argparse import ArgumentParser
from urllib.parse import urlencode import json, sys
from urllib.request import urlopen from urllib import urlencode
from urllib2 import urlopen, HTTPError, URLError
import requests import requests
LIMIT_BYTES = 100 * 1000
LIMIT_BYTES = 100*1000
def main(args): def main(args):
p = ArgumentParser("calls the setHTML API function for the given padid") p = ArgumentParser("calls the setHTML API function for the given padid")
p.add_argument("padid", help="the padid") p.add_argument("padid", help="the padid")
p.add_argument( p.add_argument("--html", default=None, help="html, default: read from stdin")
"--html", default=None, help="html, default: read from stdin" p.add_argument("--padinfo", default=".etherpump/settings.json", help="settings, default: .etherdump/settings.json")
)
p.add_argument(
"--padinfo",
default=".etherpump/settings.json",
help="settings, default: .etherdump/settings.json",
)
p.add_argument("--showurl", default=False, action="store_true") p.add_argument("--showurl", default=False, action="store_true")
# p.add_argument("--format", default="text", help="output format, can be: text, json; default: text") # p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
p.add_argument( p.add_argument("--create", default=False, action="store_true", help="flag to create pad if necessary")
"--create", p.add_argument("--limit", default=False, action="store_true", help="limit text to 100k (etherpad limit)")
default=False,
action="store_true",
help="flag to create pad if necessary",
)
p.add_argument(
"--limit",
default=False,
action="store_true",
help="limit text to 100k (etherpad limit)",
)
args = p.parse_args(args) args = p.parse_args(args)
with open(args.padinfo) as f: with open(args.padinfo) as f:
info = json.load(f) info = json.load(f)
apiurl = info.get("apiurl") apiurl = info.get("apiurl")
# apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info) # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
# data = {} # data = {}
# data['apikey'] = info['apikey'] # data['apikey'] = info['apikey']
# data['padID'] = args.padid # is utf-8 encoded # data['padID'] = args.padid # is utf-8 encoded
createPad = False createPad = False
if args.create: if args.create:
# check if it's in fact necessary # check if it's in fact necessary
requesturl = ( requesturl = apiurl+'getRevisionsCount?'+urlencode({'apikey': info['apikey'], 'padID': args.padid})
apiurl
+ "getRevisionsCount?"
+ urlencode({"apikey": info["apikey"], "padID": args.padid})
)
results = json.load(urlopen(requesturl)) results = json.load(urlopen(requesturl))
print(json.dumps(results, indent=2), file=sys.stderr) print (json.dumps(results, indent=2), file=sys.stderr)
if results["code"] != 0: if results['code'] != 0:
createPad = True createPad = True
if args.html: if args.html:
@ -65,31 +42,25 @@ def main(args):
html = sys.stdin.read() html = sys.stdin.read()
params = {} params = {}
params["apikey"] = info["apikey"] params['apikey'] = info['apikey']
params["padID"] = args.padid params['padID'] = args.padid
if createPad: if createPad:
requesturl = apiurl + "createPad" requesturl = apiurl+'createPad'
if args.showurl: if args.showurl:
print(requesturl) print (requesturl)
results = requests.post( results = requests.post(requesturl, params=params, data={'text': ''}) # json.load(urlopen(requesturl))
requesturl, params=params, data={"text": ""}
) # json.load(urlopen(requesturl))
results = json.loads(results.text) results = json.loads(results.text)
print(json.dumps(results, indent=2)) print (json.dumps(results, indent=2))
if len(html) > LIMIT_BYTES and args.limit: if len(html) > LIMIT_BYTES and args.limit:
print("limiting", len(text), LIMIT_BYTES, file=sys.stderr) print ("limiting", len(text), LIMIT_BYTES, file=sys.stderr)
html = html[:LIMIT_BYTES] html = html[:LIMIT_BYTES]
requesturl = apiurl + "setHTML" requesturl = apiurl+'setHTML'
if args.showurl: if args.showurl:
print(requesturl) print (requesturl)
# params['html'] = html # params['html'] = html
results = requests.post( results = requests.post(requesturl, params={'apikey': info['apikey']}, data={'apikey': info['apikey'], 'padID': args.padid, 'html': html}) # json.load(urlopen(requesturl))
requesturl,
params={"apikey": info["apikey"]},
data={"apikey": info["apikey"], "padID": args.padid, "html": html},
) # json.load(urlopen(requesturl))
results = json.loads(results.text) results = json.loads(results.text)
print(json.dumps(results, indent=2)) print (json.dumps(results, indent=2))
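
One caveat on the --limit branch: html[:LIMIT_BYTES] slices characters, not bytes, so a pad full of multi-byte characters can still exceed 100k bytes on the wire. If the Etherpad limit is really byte-based, a sketch like this (helper name is ours) truncates more precisely:

    def truncate_utf8(s, limit=100 * 1000):
        # cut on byte length, dropping any partially sliced final character
        return s.encode("utf-8")[:limit].decode("utf-8", "ignore")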

@@ -1,41 +1,30 @@
"""Calls the getText API function for the given padid""" from __future__ import print_function
import json
import sys
from argparse import ArgumentParser from argparse import ArgumentParser
from urllib.parse import urlencode import json, sys
from urllib.request import urlopen
try:
# python2
from urllib2 import urlopen, URLError, HTTPError
from urllib import urlencode
except ImportError:
# python3
from urllib.parse import urlencode, quote
from urllib.request import urlopen, URLError, HTTPError
import requests import requests
LIMIT_BYTES = 100 * 1000
LIMIT_BYTES = 100*1000
def main(args): def main(args):
p = ArgumentParser("calls the getText API function for the given padid") p = ArgumentParser("calls the getText API function for the given padid")
p.add_argument("padid", help="the padid") p.add_argument("padid", help="the padid")
p.add_argument( p.add_argument("--text", default=None, help="text, default: read from stdin")
"--text", default=None, help="text, default: read from stdin" p.add_argument("--padinfo", default=".etherpump/settings.json", help="settings, default: .etherdump/settings.json")
)
p.add_argument(
"--padinfo",
default=".etherpump/settings.json",
help="settings, default: .etherdump/settings.json",
)
p.add_argument("--showurl", default=False, action="store_true") p.add_argument("--showurl", default=False, action="store_true")
# p.add_argument("--format", default="text", help="output format, can be: text, json; default: text") # p.add_argument("--format", default="text", help="output format, can be: text, json; default: text")
p.add_argument( p.add_argument("--create", default=False, action="store_true", help="flag to create pad if necessary")
"--create", p.add_argument("--limit", default=False, action="store_true", help="limit text to 100k (etherpad limit)")
default=False,
action="store_true",
help="flag to create pad if necessary",
)
p.add_argument(
"--limit",
default=False,
action="store_true",
help="limit text to 100k (etherpad limit)",
)
args = p.parse_args(args) args = p.parse_args(args)
with open(args.padinfo) as f: with open(args.padinfo) as f:
@ -43,15 +32,15 @@ def main(args):
apiurl = info.get("apiurl") apiurl = info.get("apiurl")
# apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info) # apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
data = {} data = {}
data["apikey"] = info["apikey"] data['apikey'] = info['apikey']
data["padID"] = args.padid # is utf-8 encoded data['padID'] = args.padid # is utf-8 encoded
createPad = False createPad = False
if args.create: if args.create:
requesturl = apiurl + "getRevisionsCount?" + urlencode(data) requesturl = apiurl+'getRevisionsCount?'+urlencode(data)
results = json.load(urlopen(requesturl)) results = json.load(urlopen(requesturl))
# print (json.dumps(results, indent=2)) # print (json.dumps(results, indent=2))
if results["code"] != 0: if results['code'] != 0:
createPad = True createPad = True
if args.text: if args.text:
@ -60,26 +49,20 @@ def main(args):
text = sys.stdin.read() text = sys.stdin.read()
if len(text) > LIMIT_BYTES and args.limit: if len(text) > LIMIT_BYTES and args.limit:
print("limiting", len(text), LIMIT_BYTES) print ("limiting", len(text), LIMIT_BYTES)
text = text[:LIMIT_BYTES] text = text[:LIMIT_BYTES]
data["text"] = text data['text'] = text
if createPad: if createPad:
requesturl = apiurl + "createPad" requesturl = apiurl+'createPad'
else: else:
requesturl = apiurl + "setText" requesturl = apiurl+'setText'
if args.showurl: if args.showurl:
print(requesturl) print (requesturl)
results = requests.post( results = requests.post(requesturl, params=data) # json.load(urlopen(requesturl))
requesturl, params=data
) # json.load(urlopen(requesturl))
results = json.loads(results.text) results = json.loads(results.text)
if results["code"] != 0: if results['code'] != 0:
print( print ("setText: ERROR ({0}) on pad {1}: {2}".format(results['code'], args.padid, results['message']))
"setText: ERROR ({0}) on pad {1}: {2}".format(
results["code"], args.padid, results["message"]
)
)
# json.dumps(results, indent=2) # json.dumps(results, indent=2)

@@ -1,25 +1,17 @@
"""Extract and output selected fields of metadata""" from __future__ import print_function
import json
import re
import sys
from argparse import ArgumentParser from argparse import ArgumentParser
import json, sys, re
from .common import * # noqa from common import *
""" """
Extract and output selected fields of metadata Extract and output selected fields of metadata
""" """
def main (args):
def main(args): p = ArgumentParser("extract & display meta data from a specific .meta.json file, or for a given padid (nb: it still looks for a .meta.json file)")
p = ArgumentParser(
"extract & display meta data from a specific .meta.json file, or for a given padid (nb: it still looks for a .meta.json file)"
)
p.add_argument("--path", default=None, help="read from a meta.json file") p.add_argument("--path", default=None, help="read from a meta.json file")
p.add_argument("--padid", default=None, help="read meta for this padid") p.add_argument("--padid", default=None, help="read meta for this padid")
p.add_argument( p.add_argument("--format", default="{padid}", help="format str, default: {padid}")
"--format", default="{padid}", help="format str, default: {padid}"
)
args = p.parse_args(args) args = p.parse_args(args)
path = args.path path = args.path
@ -27,7 +19,7 @@ def main(args):
path = padpath(args.padid) + ".meta.json" path = padpath(args.padid) + ".meta.json"
if not path: if not path:
print("Must specify either --path or --padid") print ("Must specify either --path or --padid")
sys.exit(-1) sys.exit(-1)
with open(path) as f: with open(path) as f:
@ -35,4 +27,5 @@ def main(args):
formatstr = args.format.decode("utf-8") formatstr = args.format.decode("utf-8")
formatstr = re.sub(r"{(\w+)}", r"{0[\1]}", formatstr) formatstr = re.sub(r"{(\w+)}", r"{0[\1]}", formatstr)
print(formatstr.format(meta)) print (formatstr.format(meta).encode("utf-8"))
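
The re.sub call rewrites a template like {padid} into {0[padid]} so that str.format can index into the loaded metadata dict. A quick illustration with hypothetical metadata values:

    import re

    meta = {"padid": "varia-agenda", "revisions": 42}  # hypothetical values
    formatstr = re.sub(r"{(\w+)}", r"{0[\1]}", "{padid} has {revisions} revs")
    print(formatstr.format(meta))  # -> varia-agenda has 42 revs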

@@ -1,9 +1,11 @@
"""Update meta data files for those that have changed""" from __future__ import print_function
import os
from argparse import ArgumentParser from argparse import ArgumentParser
from urllib.parse import urlencode import sys, json, re, os
from datetime import datetime
from .common import * # noqa from urllib import urlencode
from urllib2 import urlopen, HTTPError, URLError
from math import ceil, floor
from common import *
""" """
status (meta): status (meta):
@ -15,22 +17,20 @@ design decisions...
ok based on the fact that only the txt file is pushable (via setText) ok based on the fact that only the txt file is pushable (via setText)
it makes sense to give this file "primacy" ... ie to put the other forms it makes sense to give this file "primacy" ... ie to put the other forms
(html, diff.html) in a special place (if created at all). Otherwise this (html, diff.html) in a special place (if created at all). Otherwise this
complicates the "syncing" idea.... complicates the "syncing" idea....
""" """
class PadItemException (Exception):
class PadItemException(Exception):
pass pass
class PadItem ():
class PadItem: def __init__ (self, padid=None, path=None, padexists=False):
def __init__(self, padid=None, path=None, padexists=False):
self.padexists = padexists self.padexists = padexists
if padid and path: if padid and path:
raise PadItemException("only give padid or path") raise PadItemException("only give padid or path")
if not (padid or path): if not (padid or path):
raise PadItemException("either padid or path must be specified") raise PadItemException("either padid or path must be specified")
if padid: if padid:
self.padid = padid self.padid = padid
self.path = padpath(padid, group_path="g") self.path = padpath(padid, group_path="g")
@ -39,7 +39,7 @@ class PadItem:
self.padid = padpath2id(path) self.padid = padpath2id(path)
@property @property
def status(self): def status (self):
if self.fileexists: if self.fileexists:
if self.padexists: if self.padexists:
return "S" return "S"
@ -51,89 +51,36 @@ class PadItem:
return "?" return "?"
@property @property
def fileexists(self): def fileexists (self):
return os.path.exists(self.path) return os.path.exists(self.path)
def ignore_p (path, settings=None):
def ignore_p(path, settings=None):
if path.startswith("."): if path.startswith("."):
return True return True
def main (args):
def main(args): p = ArgumentParser("Check for pads that have changed since last sync (according to .meta.json)")
p = ArgumentParser(
"Check for pads that have changed since last sync (according to .meta.json)"
)
# p.add_argument("padid", nargs="*", default=[]) # p.add_argument("padid", nargs="*", default=[])
p.add_argument( p.add_argument("--padinfo", default=".etherpump/settings.json", help="settings, default: .etherdump/settings.json")
"--padinfo", p.add_argument("--zerorevs", default=False, action="store_true", help="include pads with zero revisions, default: False (i.e. pads with no revisions are skipped)")
default=".etherpump/settings.json", p.add_argument("--pub", default=".", help="folder to store files for public pads, default: pub")
help="settings, default: .etherdump/settings.json", p.add_argument("--group", default="g", help="folder to store files for group pads, default: g")
) p.add_argument("--skip", default=None, type=int, help="skip this many items, default: None")
p.add_argument( p.add_argument("--meta", default=False, action="store_true", help="download meta to PADID.meta.json, default: False")
"--zerorevs", p.add_argument("--text", default=False, action="store_true", help="download text to PADID.txt, default: False")
default=False, p.add_argument("--html", default=False, action="store_true", help="download html to PADID.html, default: False")
action="store_true", p.add_argument("--dhtml", default=False, action="store_true", help="download dhtml to PADID.dhtml, default: False")
help="include pads with zero revisions, default: False (i.e. pads with no revisions are skipped)", p.add_argument("--all", default=False, action="store_true", help="download all files (meta, text, html, dhtml), default: False")
)
p.add_argument(
"--pub",
default=".",
help="folder to store files for public pads, default: pub",
)
p.add_argument(
"--group",
default="g",
help="folder to store files for group pads, default: g",
)
p.add_argument(
"--skip",
default=None,
type=int,
help="skip this many items, default: None",
)
p.add_argument(
"--meta",
default=False,
action="store_true",
help="download meta to PADID.meta.json, default: False",
)
p.add_argument(
"--text",
default=False,
action="store_true",
help="download text to PADID.txt, default: False",
)
p.add_argument(
"--html",
default=False,
action="store_true",
help="download html to PADID.html, default: False",
)
p.add_argument(
"--dhtml",
default=False,
action="store_true",
help="download dhtml to PADID.dhtml, default: False",
)
p.add_argument(
"--all",
default=False,
action="store_true",
help="download all files (meta, text, html, dhtml), default: False",
)
args = p.parse_args(args) args = p.parse_args(args)
info = loadpadinfo(args.padinfo) info = loadpadinfo(args.padinfo)
data = {} data = {}
data["apikey"] = info["apikey"] data['apikey'] = info['apikey']
padsbypath = {} padsbypath = {}
# listAllPads # listAllPads
padids = getjson(info["apiurl"] + "listAllPads?" + urlencode(data))["data"][ padids = getjson(info['apiurl']+'listAllPads?'+urlencode(data))['data']['padIDs']
"padIDs"
]
padids.sort() padids.sort()
for padid in padids: for padid in padids:
pad = PadItem(padid=padid, padexists=True) pad = PadItem(padid=padid, padexists=True)
@ -148,7 +95,7 @@ def main(args):
pad = PadItem(path=p) pad = PadItem(path=p)
padsbypath[pad.path] = pad padsbypath[pad.path] = pad
pads = list(padsbypath.values()) pads = padsbypath.values()
pads.sort(key=lambda x: (x.status, x.padid)) pads.sort(key=lambda x: (x.status, x.padid))
curstat = None curstat = None
@ -156,9 +103,9 @@ def main(args):
if p.status != curstat: if p.status != curstat:
curstat = p.status curstat = p.status
if curstat == "F": if curstat == "F":
print("New/changed files") print ("New/changed files")
elif curstat == "P": elif curstat == "P":
print("New/changed pads") print ("New/changed pads")
elif curstat == ".": elif curstat == ".":
print("Up to date") print ("Up to date")
print(" ", p.status, p.padid) print (" ", p.status, p.padid)

padinfo.sample.json (new file)
@@ -0,0 +1,12 @@
{
"protocol": "http",
"port": 9001,
"hostname": "localhost",
"apiversion": "1.2.9",
"apiurl": "/api/",
"apikey": "8f55f9ede1b3f5d88b3c54eb638225a7bb71c64867786b608abacfdb7d418be1",
"groups": {
"71FpVh4MZBvl8VZ6": {"name": "Transmediale", "id": 43},
"HyYfoX3Q6S5utxs5": {"name": "test", "id": 42 }
}
}
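
These are the settings that the commented-out lines in the commands above assemble into an API base URL; loading the sample gives, for instance:

    import json

    with open("padinfo.sample.json") as f:
        info = json.load(f)
    apiurl = "{0[protocol]}://{0[hostname]}:{0[port]}{0[apiurl]}{0[apiversion]}/".format(info)
    print(apiurl)  # -> http://localhost:9001/api/1.2.9/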

poetry.lock (generated, removed in this comparison)
@@ -1,809 +0,0 @@
[[package]]
name = "anyio"
version = "1.4.0"
description = "High level compatibility layer for multiple asynchronous event loop implementations"
category = "main"
optional = false
python-versions = ">=3.5.3"
[package.dependencies]
async-generator = "*"
idna = ">=2.8"
sniffio = ">=1.1"
[package.extras]
curio = ["curio (==0.9)", "curio (>=0.9)"]
doc = ["sphinx-rtd-theme", "sphinx-autodoc-typehints (>=1.2.0)"]
test = ["coverage (>=4.5)", "hypothesis (>=4.0)", "pytest (>=3.7.2)", "uvloop"]
trio = ["trio (>=0.12)"]
[[package]]
name = "appdirs"
version = "1.4.4"
description = "A small Python module for determining appropriate platform-specific dirs, e.g. a \"user data dir\"."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "asks"
version = "2.4.10"
description = "asks - async http"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
anyio = "<2"
async_generator = "*"
h11 = "*"
[[package]]
name = "async-generator"
version = "1.10"
description = "Async generators and context managers for Python 3.5+"
category = "main"
optional = false
python-versions = ">=3.5"
[[package]]
name = "attrs"
version = "20.3.0"
description = "Classes Without Boilerplate"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[package.extras]
dev = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "zope.interface", "furo", "sphinx", "pre-commit"]
docs = ["furo", "sphinx", "zope.interface"]
tests = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six", "zope.interface"]
tests_no_zope = ["coverage[toml] (>=5.0.2)", "hypothesis", "pympler", "pytest (>=4.3.0)", "six"]
[[package]]
name = "black"
version = "19.10b0"
description = "The uncompromising code formatter."
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
appdirs = "*"
attrs = ">=18.1.0"
click = ">=6.5"
pathspec = ">=0.6,<1"
regex = "*"
toml = ">=0.9.4"
typed-ast = ">=1.4.0"
[package.extras]
d = ["aiohttp (>=3.3.2)", "aiohttp-cors"]
[[package]]
name = "certifi"
version = "2020.12.5"
description = "Python package for providing Mozilla's CA Bundle."
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "cffi"
version = "1.14.5"
description = "Foreign Function Interface for Python calling C code."
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
pycparser = "*"
[[package]]
name = "chardet"
version = "4.0.0"
description = "Universal encoding detector for Python 2 and 3"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "click"
version = "7.1.2"
description = "Composable command line interface toolkit"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "contextvars"
version = "2.4"
description = "PEP 567 Backport"
category = "main"
optional = false
python-versions = "*"
[package.dependencies]
immutables = ">=0.9"
[[package]]
name = "flake8"
version = "3.9.0"
description = "the modular source code checker: pep8 pyflakes and co"
category = "dev"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,>=2.7"
[package.dependencies]
importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
mccabe = ">=0.6.0,<0.7.0"
pycodestyle = ">=2.7.0,<2.8.0"
pyflakes = ">=2.3.0,<2.4.0"
[[package]]
name = "h11"
version = "0.12.0"
description = "A pure-Python, bring-your-own-I/O implementation of HTTP/1.1"
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "html5lib"
version = "1.1"
description = "HTML parser based on the WHATWG HTML specification"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[package.dependencies]
six = ">=1.9"
webencodings = "*"
[package.extras]
all = ["genshi", "chardet (>=2.2)", "lxml"]
chardet = ["chardet (>=2.2)"]
genshi = ["genshi"]
lxml = ["lxml"]
[[package]]
name = "idna"
version = "2.10"
description = "Internationalized Domain Names in Applications (IDNA)"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "immutables"
version = "0.15"
description = "Immutable Collections"
category = "main"
optional = false
python-versions = ">=3.5"
[package.extras]
test = ["flake8 (>=3.8.4,<3.9.0)", "pycodestyle (>=2.6.0,<2.7.0)"]
[[package]]
name = "importlib-metadata"
version = "3.7.3"
description = "Read metadata from Python packages"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.dependencies]
typing-extensions = {version = ">=3.6.4", markers = "python_version < \"3.8\""}
zipp = ">=0.5"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"]
testing = ["pytest (>=3.5,!=3.7.3)", "pytest-checkdocs (>=1.2.3)", "pytest-flake8", "pytest-cov", "pytest-enabler", "packaging", "pep517", "pyfakefs", "flufl.flake8", "pytest-black (>=0.3.7)", "pytest-mypy", "importlib-resources (>=1.3)"]
[[package]]
name = "isort"
version = "5.7.0"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6,<4.0"
[package.extras]
pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
requirements_deprecated_finder = ["pipreqs", "pip-api"]
colors = ["colorama (>=0.4.3,<0.5.0)"]
[[package]]
name = "jinja2"
version = "2.11.3"
description = "A very fast and expressive template engine."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[package.dependencies]
MarkupSafe = ">=0.23"
[package.extras]
i18n = ["Babel (>=0.8)"]
[[package]]
name = "markupsafe"
version = "1.1.1"
description = "Safely add untrusted strings to HTML/XML markup."
category = "main"
optional = false
python-versions = ">=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*"
[[package]]
name = "mccabe"
version = "0.6.1"
description = "McCabe checker, plugin for flake8"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "mypy"
version = "0.782"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.5"
[package.dependencies]
mypy-extensions = ">=0.4.3,<0.5.0"
typed-ast = ">=1.4.0,<1.5.0"
typing-extensions = ">=3.7.4"
[package.extras]
dmypy = ["psutil (>=4.0)"]
[[package]]
name = "mypy-extensions"
version = "0.4.3"
description = "Experimental type system extensions for programs checked with the mypy typechecker."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "outcome"
version = "1.1.0"
description = "Capture the outcome of Python function calls."
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
attrs = ">=19.2.0"
[[package]]
name = "pathspec"
version = "0.8.1"
description = "Utility library for gitignore style pattern matching of file paths."
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[[package]]
name = "pycodestyle"
version = "2.7.0"
description = "Python style guide checker"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pycparser"
version = "2.20"
description = "C parser in Python"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pyflakes"
version = "2.3.0"
description = "passive checker of Python programs"
category = "dev"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*"
[[package]]
name = "pypandoc"
version = "1.5"
description = "Thin wrapper for pandoc."
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "python-dateutil"
version = "2.8.1"
description = "Extensions to the standard Python datetime module"
category = "main"
optional = false
python-versions = "!=3.0.*,!=3.1.*,!=3.2.*,>=2.7"
[package.dependencies]
six = ">=1.5"
[[package]]
name = "regex"
version = "2021.3.17"
description = "Alternative regular expression module, to replace re."
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "requests"
version = "2.25.1"
description = "Python HTTP for Humans."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*"
[package.dependencies]
certifi = ">=2017.4.17"
chardet = ">=3.0.2,<5"
idna = ">=2.5,<3"
urllib3 = ">=1.21.1,<1.27"
[package.extras]
security = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)"]
socks = ["PySocks (>=1.5.6,!=1.5.7)", "win-inet-pton"]
[[package]]
name = "six"
version = "1.15.0"
description = "Python 2 and 3 compatibility utilities"
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "sniffio"
version = "1.2.0"
description = "Sniff out which async library your code is running under"
category = "main"
optional = false
python-versions = ">=3.5"
[package.dependencies]
contextvars = {version = ">=2.1", markers = "python_version < \"3.7\""}
[[package]]
name = "sortedcontainers"
version = "2.3.0"
description = "Sorted Containers -- Sorted List, Sorted Dict, Sorted Set"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "toml"
version = "0.10.2"
description = "Python Library for Tom's Obvious, Minimal Language"
category = "dev"
optional = false
python-versions = ">=2.6, !=3.0.*, !=3.1.*, !=3.2.*"
[[package]]
name = "trio"
version = "0.17.0"
description = "A friendly Python library for async concurrency and I/O"
category = "main"
optional = false
python-versions = ">=3.6"
[package.dependencies]
async-generator = ">=1.9"
attrs = ">=19.2.0"
cffi = {version = ">=1.14", markers = "os_name == \"nt\" and implementation_name != \"pypy\""}
contextvars = {version = ">=2.1", markers = "python_version < \"3.7\""}
idna = "*"
outcome = "*"
sniffio = "*"
sortedcontainers = "*"
[[package]]
name = "typed-ast"
version = "1.4.2"
description = "a fork of Python 2 and 3 ast modules with type comment support"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "typing-extensions"
version = "3.7.4.3"
description = "Backported and Experimental Type Hints for Python 3.5+"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "urllib3"
version = "1.26.4"
description = "HTTP library with thread-safe connection pooling, file post, and more."
category = "main"
optional = false
python-versions = ">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, <4"
[package.extras]
secure = ["pyOpenSSL (>=0.14)", "cryptography (>=1.3.4)", "idna (>=2.0.0)", "certifi", "ipaddress"]
socks = ["PySocks (>=1.5.6,!=1.5.7,<2.0)"]
brotli = ["brotlipy (>=0.6.0)"]
[[package]]
name = "webencodings"
version = "0.5.1"
description = "Character encoding aliases for legacy web content"
category = "main"
optional = false
python-versions = "*"
[[package]]
name = "zipp"
version = "3.4.1"
description = "Backport of pathlib-compatible object wrapper for zip files"
category = "dev"
optional = false
python-versions = ">=3.6"
[package.extras]
docs = ["sphinx", "jaraco.packaging (>=8.2)", "rst.linker (>=1.9)"]
testing = ["pytest (>=4.6)", "pytest-checkdocs (>=1.2.3)", "pytest-flake8", "pytest-cov", "pytest-enabler", "jaraco.itertools", "func-timeout", "pytest-black (>=0.3.7)", "pytest-mypy"]
[metadata]
lock-version = "1.1"
python-versions = "^3.6"
content-hash = "f526837d3cce386db46118b1044839c60e52deafb740bf410c3cf75f0648987e"
[metadata.files]
anyio = [
{file = "anyio-1.4.0-py3-none-any.whl", hash = "sha256:9ee67e8131853f42957e214d4531cee6f2b66dda164a298d9686a768b7161a4f"},
{file = "anyio-1.4.0.tar.gz", hash = "sha256:95f60964fc4583f3f226f8dc275dfb02aefe7b39b85a999c6d14f4ec5323c1d8"},
]
appdirs = [
{file = "appdirs-1.4.4-py2.py3-none-any.whl", hash = "sha256:a841dacd6b99318a741b166adb07e19ee71a274450e68237b4650ca1055ab128"},
{file = "appdirs-1.4.4.tar.gz", hash = "sha256:7d5d0167b2b1ba821647616af46a749d1c653740dd0d2415100fe26e27afdf41"},
]
asks = [
{file = "asks-2.4.10.tar.gz", hash = "sha256:c9db16bdf9fed8cae76db3b4365216ea2f1563b8ab9fe9a5e8e554177de61192"},
]
async-generator = [
{file = "async_generator-1.10-py3-none-any.whl", hash = "sha256:01c7bf666359b4967d2cda0000cc2e4af16a0ae098cbffcb8472fb9e8ad6585b"},
{file = "async_generator-1.10.tar.gz", hash = "sha256:6ebb3d106c12920aaae42ccb6f787ef5eefdcdd166ea3d628fa8476abe712144"},
]
attrs = [
{file = "attrs-20.3.0-py2.py3-none-any.whl", hash = "sha256:31b2eced602aa8423c2aea9c76a724617ed67cf9513173fd3a4f03e3a929c7e6"},
{file = "attrs-20.3.0.tar.gz", hash = "sha256:832aa3cde19744e49938b91fea06d69ecb9e649c93ba974535d08ad92164f700"},
]
black = [
{file = "black-19.10b0-py36-none-any.whl", hash = "sha256:1b30e59be925fafc1ee4565e5e08abef6b03fe455102883820fe5ee2e4734e0b"},
{file = "black-19.10b0.tar.gz", hash = "sha256:c2edb73a08e9e0e6f65a0e6af18b059b8b1cdd5bef997d7a0b181df93dc81539"},
]
certifi = [
{file = "certifi-2020.12.5-py2.py3-none-any.whl", hash = "sha256:719a74fb9e33b9bd44cc7f3a8d94bc35e4049deebe19ba7d8e108280cfd59830"},
{file = "certifi-2020.12.5.tar.gz", hash = "sha256:1a4995114262bffbc2413b159f2a1a480c969de6e6eb13ee966d470af86af59c"},
]
cffi = [
{file = "cffi-1.14.5-cp27-cp27m-macosx_10_9_x86_64.whl", hash = "sha256:bb89f306e5da99f4d922728ddcd6f7fcebb3241fc40edebcb7284d7514741991"},
{file = "cffi-1.14.5-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:34eff4b97f3d982fb93e2831e6750127d1355a923ebaeeb565407b3d2f8d41a1"},
{file = "cffi-1.14.5-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:99cd03ae7988a93dd00bcd9d0b75e1f6c426063d6f03d2f90b89e29b25b82dfa"},
{file = "cffi-1.14.5-cp27-cp27m-win32.whl", hash = "sha256:65fa59693c62cf06e45ddbb822165394a288edce9e276647f0046e1ec26920f3"},
{file = "cffi-1.14.5-cp27-cp27m-win_amd64.whl", hash = "sha256:51182f8927c5af975fece87b1b369f722c570fe169f9880764b1ee3bca8347b5"},
{file = "cffi-1.14.5-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:43e0b9d9e2c9e5d152946b9c5fe062c151614b262fda2e7b201204de0b99e482"},
{file = "cffi-1.14.5-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:cbde590d4faaa07c72bf979734738f328d239913ba3e043b1e98fe9a39f8b2b6"},
{file = "cffi-1.14.5-cp35-cp35m-macosx_10_9_x86_64.whl", hash = "sha256:5de7970188bb46b7bf9858eb6890aad302577a5f6f75091fd7cdd3ef13ef3045"},
{file = "cffi-1.14.5-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:a465da611f6fa124963b91bf432d960a555563efe4ed1cc403ba5077b15370aa"},
{file = "cffi-1.14.5-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:d42b11d692e11b6634f7613ad8df5d6d5f8875f5d48939520d351007b3c13406"},
{file = "cffi-1.14.5-cp35-cp35m-win32.whl", hash = "sha256:72d8d3ef52c208ee1c7b2e341f7d71c6fd3157138abf1a95166e6165dd5d4369"},
{file = "cffi-1.14.5-cp35-cp35m-win_amd64.whl", hash = "sha256:29314480e958fd8aab22e4a58b355b629c59bf5f2ac2492b61e3dc06d8c7a315"},
{file = "cffi-1.14.5-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:3d3dd4c9e559eb172ecf00a2a7517e97d1e96de2a5e610bd9b68cea3925b4892"},
{file = "cffi-1.14.5-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:48e1c69bbacfc3d932221851b39d49e81567a4d4aac3b21258d9c24578280058"},
{file = "cffi-1.14.5-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:69e395c24fc60aad6bb4fa7e583698ea6cc684648e1ffb7fe85e3c1ca131a7d5"},
{file = "cffi-1.14.5-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:9e93e79c2551ff263400e1e4be085a1210e12073a31c2011dbbda14bda0c6132"},
{file = "cffi-1.14.5-cp36-cp36m-win32.whl", hash = "sha256:58e3f59d583d413809d60779492342801d6e82fefb89c86a38e040c16883be53"},
{file = "cffi-1.14.5-cp36-cp36m-win_amd64.whl", hash = "sha256:005a36f41773e148deac64b08f233873a4d0c18b053d37da83f6af4d9087b813"},
{file = "cffi-1.14.5-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:2894f2df484ff56d717bead0a5c2abb6b9d2bf26d6960c4604d5c48bbc30ee73"},
{file = "cffi-1.14.5-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:0857f0ae312d855239a55c81ef453ee8fd24136eaba8e87a2eceba644c0d4c06"},
{file = "cffi-1.14.5-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:cd2868886d547469123fadc46eac7ea5253ea7fcb139f12e1dfc2bbd406427d1"},
{file = "cffi-1.14.5-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:35f27e6eb43380fa080dccf676dece30bef72e4a67617ffda586641cd4508d49"},
{file = "cffi-1.14.5-cp37-cp37m-win32.whl", hash = "sha256:9ff227395193126d82e60319a673a037d5de84633f11279e336f9c0f189ecc62"},
{file = "cffi-1.14.5-cp37-cp37m-win_amd64.whl", hash = "sha256:9cf8022fb8d07a97c178b02327b284521c7708d7c71a9c9c355c178ac4bbd3d4"},
{file = "cffi-1.14.5-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:8b198cec6c72df5289c05b05b8b0969819783f9418e0409865dac47288d2a053"},
{file = "cffi-1.14.5-cp38-cp38-manylinux1_i686.whl", hash = "sha256:ad17025d226ee5beec591b52800c11680fca3df50b8b29fe51d882576e039ee0"},
{file = "cffi-1.14.5-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:6c97d7350133666fbb5cf4abdc1178c812cb205dc6f41d174a7b0f18fb93337e"},
{file = "cffi-1.14.5-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:8ae6299f6c68de06f136f1f9e69458eae58f1dacf10af5c17353eae03aa0d827"},
{file = "cffi-1.14.5-cp38-cp38-win32.whl", hash = "sha256:b85eb46a81787c50650f2392b9b4ef23e1f126313b9e0e9013b35c15e4288e2e"},
{file = "cffi-1.14.5-cp38-cp38-win_amd64.whl", hash = "sha256:1f436816fc868b098b0d63b8920de7d208c90a67212546d02f84fe78a9c26396"},
{file = "cffi-1.14.5-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1071534bbbf8cbb31b498d5d9db0f274f2f7a865adca4ae429e147ba40f73dea"},
{file = "cffi-1.14.5-cp39-cp39-manylinux1_i686.whl", hash = "sha256:9de2e279153a443c656f2defd67769e6d1e4163952b3c622dcea5b08a6405322"},
{file = "cffi-1.14.5-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:6e4714cc64f474e4d6e37cfff31a814b509a35cb17de4fb1999907575684479c"},
{file = "cffi-1.14.5-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:158d0d15119b4b7ff6b926536763dc0714313aa59e320ddf787502c70c4d4bee"},
{file = "cffi-1.14.5-cp39-cp39-win32.whl", hash = "sha256:afb29c1ba2e5a3736f1c301d9d0abe3ec8b86957d04ddfa9d7a6a42b9367e396"},
{file = "cffi-1.14.5-cp39-cp39-win_amd64.whl", hash = "sha256:f2d45f97ab6bb54753eab54fffe75aaf3de4ff2341c9daee1987ee1837636f1d"},
{file = "cffi-1.14.5.tar.gz", hash = "sha256:fd78e5fee591709f32ef6edb9a015b4aa1a5022598e36227500c8f4e02328d9c"},
]
chardet = [
{file = "chardet-4.0.0-py2.py3-none-any.whl", hash = "sha256:f864054d66fd9118f2e67044ac8981a54775ec5b67aed0441892edb553d21da5"},
{file = "chardet-4.0.0.tar.gz", hash = "sha256:0d6f53a15db4120f2b08c94f11e7d93d2c911ee118b6b30a04ec3ee8310179fa"},
]
click = [
{file = "click-7.1.2-py2.py3-none-any.whl", hash = "sha256:dacca89f4bfadd5de3d7489b7c8a566eee0d3676333fbb50030263894c38c0dc"},
{file = "click-7.1.2.tar.gz", hash = "sha256:d2b5255c7c6349bc1bd1e59e08cd12acbbd63ce649f2588755783aa94dfb6b1a"},
]
contextvars = [
{file = "contextvars-2.4.tar.gz", hash = "sha256:f38c908aaa59c14335eeea12abea5f443646216c4e29380d7bf34d2018e2c39e"},
]
flake8 = [
{file = "flake8-3.9.0-py2.py3-none-any.whl", hash = "sha256:12d05ab02614b6aee8df7c36b97d1a3b2372761222b19b58621355e82acddcff"},
{file = "flake8-3.9.0.tar.gz", hash = "sha256:78873e372b12b093da7b5e5ed302e8ad9e988b38b063b61ad937f26ca58fc5f0"},
]
h11 = [
{file = "h11-0.12.0-py3-none-any.whl", hash = "sha256:36a3cb8c0a032f56e2da7084577878a035d3b61d104230d4bd49c0c6b555a9c6"},
{file = "h11-0.12.0.tar.gz", hash = "sha256:47222cb6067e4a307d535814917cd98fd0a57b6788ce715755fa2b6c28b56042"},
]
html5lib = [
{file = "html5lib-1.1-py2.py3-none-any.whl", hash = "sha256:0d78f8fde1c230e99fe37986a60526d7049ed4bf8a9fadbad5f00e22e58e041d"},
{file = "html5lib-1.1.tar.gz", hash = "sha256:b2e5b40261e20f354d198eae92afc10d750afb487ed5e50f9c4eaf07c184146f"},
]
idna = [
{file = "idna-2.10-py2.py3-none-any.whl", hash = "sha256:b97d804b1e9b523befed77c48dacec60e6dcb0b5391d57af6a65a312a90648c0"},
{file = "idna-2.10.tar.gz", hash = "sha256:b307872f855b18632ce0c21c5e45be78c0ea7ae4c15c828c20788b26921eb3f6"},
]
immutables = [
{file = "immutables-0.15-cp35-cp35m-macosx_10_14_x86_64.whl", hash = "sha256:6728f4392e3e8e64b593a5a0cd910a1278f07f879795517e09f308daed138631"},
{file = "immutables-0.15-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:f0836cd3bdc37c8a77b192bbe5f41dbcc3ce654db048ebbba89bdfe6db7a1c7a"},
{file = "immutables-0.15-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:8703d8abfd8687932f2a05f38e7de270c3a6ca3bd1c1efb3c938656b3f2f985a"},
{file = "immutables-0.15-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:b8ad986f9b532c026f19585289384b0769188fcb68b37c7f0bd0df9092a6ca54"},
{file = "immutables-0.15-cp36-cp36m-win_amd64.whl", hash = "sha256:6f117d9206165b9dab8fd81c5129db757d1a044953f438654236ed9a7a4224ae"},
{file = "immutables-0.15-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:b75ade826920c4e490b1bb14cf967ac14e61eb7c5562161c5d7337d61962c226"},
{file = "immutables-0.15-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:b7e13c061785e34f73c4f659861f1b3e4a5fd918e4395c84b21c4e3d449ebe27"},
{file = "immutables-0.15-cp37-cp37m-win_amd64.whl", hash = "sha256:3035849accee4f4e510ed7c94366a40e0f5fef9069fbe04a35f4787b13610a4a"},
{file = "immutables-0.15-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:b04fa69174e0c8f815f9c55f2a43fc9e5a68452fab459a08e904a74e8471639f"},
{file = "immutables-0.15-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:141c2e9ea515a3a815007a429f0b47a578ebeb42c831edaec882a245a35fffca"},
{file = "immutables-0.15-cp38-cp38-win_amd64.whl", hash = "sha256:cbe8c64640637faa5535d539421b293327f119c31507c33ca880bd4f16035eb6"},
{file = "immutables-0.15-cp39-cp39-macosx_10_14_x86_64.whl", hash = "sha256:a0a4e4417d5ef4812d7f99470cd39347b58cb927365dd2b8da9161040d260db0"},
{file = "immutables-0.15-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:3b15c08c71c59e5b7c2470ef949d49ff9f4263bb77f488422eaa157da84d6999"},
{file = "immutables-0.15-cp39-cp39-win_amd64.whl", hash = "sha256:2283a93c151566e6830aee0e5bee55fc273455503b43aa004356b50f9182092b"},
{file = "immutables-0.15.tar.gz", hash = "sha256:3713ab1ebbb6946b7ce1387bb9d1d7f5e09c45add58c2a2ee65f963c171e746b"},
]
importlib-metadata = [
{file = "importlib_metadata-3.7.3-py3-none-any.whl", hash = "sha256:b74159469b464a99cb8cc3e21973e4d96e05d3024d337313fedb618a6e86e6f4"},
{file = "importlib_metadata-3.7.3.tar.gz", hash = "sha256:742add720a20d0467df2f444ae41704000f50e1234f46174b51f9c6031a1bd71"},
]
isort = [
{file = "isort-5.7.0-py3-none-any.whl", hash = "sha256:fff4f0c04e1825522ce6949973e83110a6e907750cd92d128b0d14aaaadbffdc"},
{file = "isort-5.7.0.tar.gz", hash = "sha256:c729845434366216d320e936b8ad6f9d681aab72dc7cbc2d51bedc3582f3ad1e"},
]
jinja2 = [
{file = "Jinja2-2.11.3-py2.py3-none-any.whl", hash = "sha256:03e47ad063331dd6a3f04a43eddca8a966a26ba0c5b7207a9a9e4e08f1b29419"},
{file = "Jinja2-2.11.3.tar.gz", hash = "sha256:a6d58433de0ae800347cab1fa3043cebbabe8baa9d29e668f1c768cb87a333c6"},
]
markupsafe = [
{file = "MarkupSafe-1.1.1-cp27-cp27m-macosx_10_6_intel.whl", hash = "sha256:09027a7803a62ca78792ad89403b1b7a73a01c8cb65909cd876f7fcebd79b161"},
{file = "MarkupSafe-1.1.1-cp27-cp27m-manylinux1_i686.whl", hash = "sha256:e249096428b3ae81b08327a63a485ad0878de3fb939049038579ac0ef61e17e7"},
{file = "MarkupSafe-1.1.1-cp27-cp27m-manylinux1_x86_64.whl", hash = "sha256:500d4957e52ddc3351cabf489e79c91c17f6e0899158447047588650b5e69183"},
{file = "MarkupSafe-1.1.1-cp27-cp27m-win32.whl", hash = "sha256:b2051432115498d3562c084a49bba65d97cf251f5a331c64a12ee7e04dacc51b"},
{file = "MarkupSafe-1.1.1-cp27-cp27m-win_amd64.whl", hash = "sha256:98c7086708b163d425c67c7a91bad6e466bb99d797aa64f965e9d25c12111a5e"},
{file = "MarkupSafe-1.1.1-cp27-cp27mu-manylinux1_i686.whl", hash = "sha256:cd5df75523866410809ca100dc9681e301e3c27567cf498077e8551b6d20e42f"},
{file = "MarkupSafe-1.1.1-cp27-cp27mu-manylinux1_x86_64.whl", hash = "sha256:43a55c2930bbc139570ac2452adf3d70cdbb3cfe5912c71cdce1c2c6bbd9c5d1"},
{file = "MarkupSafe-1.1.1-cp34-cp34m-macosx_10_6_intel.whl", hash = "sha256:1027c282dad077d0bae18be6794e6b6b8c91d58ed8a8d89a89d59693b9131db5"},
{file = "MarkupSafe-1.1.1-cp34-cp34m-manylinux1_i686.whl", hash = "sha256:62fe6c95e3ec8a7fad637b7f3d372c15ec1caa01ab47926cfdf7a75b40e0eac1"},
{file = "MarkupSafe-1.1.1-cp34-cp34m-manylinux1_x86_64.whl", hash = "sha256:88e5fcfb52ee7b911e8bb6d6aa2fd21fbecc674eadd44118a9cc3863f938e735"},
{file = "MarkupSafe-1.1.1-cp34-cp34m-win32.whl", hash = "sha256:ade5e387d2ad0d7ebf59146cc00c8044acbd863725f887353a10df825fc8ae21"},
{file = "MarkupSafe-1.1.1-cp34-cp34m-win_amd64.whl", hash = "sha256:09c4b7f37d6c648cb13f9230d847adf22f8171b1ccc4d5682398e77f40309235"},
{file = "MarkupSafe-1.1.1-cp35-cp35m-macosx_10_6_intel.whl", hash = "sha256:79855e1c5b8da654cf486b830bd42c06e8780cea587384cf6545b7d9ac013a0b"},
{file = "MarkupSafe-1.1.1-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:c8716a48d94b06bb3b2524c2b77e055fb313aeb4ea620c8dd03a105574ba704f"},
{file = "MarkupSafe-1.1.1-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:7c1699dfe0cf8ff607dbdcc1e9b9af1755371f92a68f706051cc8c37d447c905"},
{file = "MarkupSafe-1.1.1-cp35-cp35m-win32.whl", hash = "sha256:6dd73240d2af64df90aa7c4e7481e23825ea70af4b4922f8ede5b9e35f78a3b1"},
{file = "MarkupSafe-1.1.1-cp35-cp35m-win_amd64.whl", hash = "sha256:9add70b36c5666a2ed02b43b335fe19002ee5235efd4b8a89bfcf9005bebac0d"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-macosx_10_6_intel.whl", hash = "sha256:24982cc2533820871eba85ba648cd53d8623687ff11cbb805be4ff7b4c971aff"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:d53bc011414228441014aa71dbec320c66468c1030aae3a6e29778a3382d96e5"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:00bc623926325b26bb9605ae9eae8a215691f33cae5df11ca5424f06f2d1f473"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:717ba8fe3ae9cc0006d7c451f0bb265ee07739daf76355d06366154ee68d221e"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:3b8a6499709d29c2e2399569d96719a1b21dcd94410a586a18526b143ec8470f"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:84dee80c15f1b560d55bcfe6d47b27d070b4681c699c572af2e3c7cc90a3b8e0"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:b1dba4527182c95a0db8b6060cc98ac49b9e2f5e64320e2b56e47cb2831978c7"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-win32.whl", hash = "sha256:535f6fc4d397c1563d08b88e485c3496cf5784e927af890fb3c3aac7f933ec66"},
{file = "MarkupSafe-1.1.1-cp36-cp36m-win_amd64.whl", hash = "sha256:b1282f8c00509d99fef04d8ba936b156d419be841854fe901d8ae224c59f0be5"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-macosx_10_6_intel.whl", hash = "sha256:8defac2f2ccd6805ebf65f5eeb132adcf2ab57aa11fdf4c0dd5169a004710e7d"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:bf5aa3cbcfdf57fa2ee9cd1822c862ef23037f5c832ad09cfea57fa846dec193"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:46c99d2de99945ec5cb54f23c8cd5689f6d7177305ebff350a58ce5f8de1669e"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:ba59edeaa2fc6114428f1637ffff42da1e311e29382d81b339c1817d37ec93c6"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:6fffc775d90dcc9aed1b89219549b329a9250d918fd0b8fa8d93d154918422e1"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:a6a744282b7718a2a62d2ed9d993cad6f5f585605ad352c11de459f4108df0a1"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:195d7d2c4fbb0ee8139a6cf67194f3973a6b3042d742ebe0a9ed36d8b6f0c07f"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-win32.whl", hash = "sha256:b00c1de48212e4cc9603895652c5c410df699856a2853135b3967591e4beebc2"},
{file = "MarkupSafe-1.1.1-cp37-cp37m-win_amd64.whl", hash = "sha256:9bf40443012702a1d2070043cb6291650a0841ece432556f784f004937f0f32c"},
{file = "MarkupSafe-1.1.1-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6788b695d50a51edb699cb55e35487e430fa21f1ed838122d722e0ff0ac5ba15"},
{file = "MarkupSafe-1.1.1-cp38-cp38-manylinux1_i686.whl", hash = "sha256:cdb132fc825c38e1aeec2c8aa9338310d29d337bebbd7baa06889d09a60a1fa2"},
{file = "MarkupSafe-1.1.1-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:13d3144e1e340870b25e7b10b98d779608c02016d5184cfb9927a9f10c689f42"},
{file = "MarkupSafe-1.1.1-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:acf08ac40292838b3cbbb06cfe9b2cb9ec78fce8baca31ddb87aaac2e2dc3bc2"},
{file = "MarkupSafe-1.1.1-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:d9be0ba6c527163cbed5e0857c451fcd092ce83947944d6c14bc95441203f032"},
{file = "MarkupSafe-1.1.1-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:caabedc8323f1e93231b52fc32bdcde6db817623d33e100708d9a68e1f53b26b"},
{file = "MarkupSafe-1.1.1-cp38-cp38-win32.whl", hash = "sha256:596510de112c685489095da617b5bcbbac7dd6384aeebeda4df6025d0256a81b"},
{file = "MarkupSafe-1.1.1-cp38-cp38-win_amd64.whl", hash = "sha256:e8313f01ba26fbbe36c7be1966a7b7424942f670f38e666995b88d012765b9be"},
{file = "MarkupSafe-1.1.1-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:d73a845f227b0bfe8a7455ee623525ee656a9e2e749e4742706d80a6065d5e2c"},
{file = "MarkupSafe-1.1.1-cp39-cp39-manylinux1_i686.whl", hash = "sha256:98bae9582248d6cf62321dcb52aaf5d9adf0bad3b40582925ef7c7f0ed85fceb"},
{file = "MarkupSafe-1.1.1-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:2beec1e0de6924ea551859edb9e7679da6e4870d32cb766240ce17e0a0ba2014"},
{file = "MarkupSafe-1.1.1-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:7fed13866cf14bba33e7176717346713881f56d9d2bcebab207f7a036f41b850"},
{file = "MarkupSafe-1.1.1-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:6f1e273a344928347c1290119b493a1f0303c52f5a5eae5f16d74f48c15d4a85"},
{file = "MarkupSafe-1.1.1-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:feb7b34d6325451ef96bc0e36e1a6c0c1c64bc1fbec4b854f4529e51887b1621"},
{file = "MarkupSafe-1.1.1-cp39-cp39-win32.whl", hash = "sha256:22c178a091fc6630d0d045bdb5992d2dfe14e3259760e713c490da5323866c39"},
{file = "MarkupSafe-1.1.1-cp39-cp39-win_amd64.whl", hash = "sha256:b7d644ddb4dbd407d31ffb699f1d140bc35478da613b441c582aeb7c43838dd8"},
{file = "MarkupSafe-1.1.1.tar.gz", hash = "sha256:29872e92839765e546828bb7754a68c418d927cd064fd4708fab9fe9c8bb116b"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},
{file = "mccabe-0.6.1.tar.gz", hash = "sha256:dd8d182285a0fe56bace7f45b5e7d1a6ebcbf524e8f3bd87eb0f125271b8831f"},
]
mypy = [
{file = "mypy-0.782-cp35-cp35m-macosx_10_6_x86_64.whl", hash = "sha256:2c6cde8aa3426c1682d35190b59b71f661237d74b053822ea3d748e2c9578a7c"},
{file = "mypy-0.782-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:9c7a9a7ceb2871ba4bac1cf7217a7dd9ccd44c27c2950edbc6dc08530f32ad4e"},
{file = "mypy-0.782-cp35-cp35m-win_amd64.whl", hash = "sha256:c05b9e4fb1d8a41d41dec8786c94f3b95d3c5f528298d769eb8e73d293abc48d"},
{file = "mypy-0.782-cp36-cp36m-macosx_10_6_x86_64.whl", hash = "sha256:6731603dfe0ce4352c555c6284c6db0dc935b685e9ce2e4cf220abe1e14386fd"},
{file = "mypy-0.782-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:f05644db6779387ccdb468cc47a44b4356fc2ffa9287135d05b70a98dc83b89a"},
{file = "mypy-0.782-cp36-cp36m-win_amd64.whl", hash = "sha256:b7fbfabdbcc78c4f6fc4712544b9b0d6bf171069c6e0e3cb82440dd10ced3406"},
{file = "mypy-0.782-cp37-cp37m-macosx_10_6_x86_64.whl", hash = "sha256:3fdda71c067d3ddfb21da4b80e2686b71e9e5c72cca65fa216d207a358827f86"},
{file = "mypy-0.782-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:d7df6eddb6054d21ca4d3c6249cae5578cb4602951fd2b6ee2f5510ffb098707"},
{file = "mypy-0.782-cp37-cp37m-win_amd64.whl", hash = "sha256:a4a2cbcfc4cbf45cd126f531dedda8485671545b43107ded25ce952aac6fb308"},
{file = "mypy-0.782-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6bb93479caa6619d21d6e7160c552c1193f6952f0668cdda2f851156e85186fc"},
{file = "mypy-0.782-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:81c7908b94239c4010e16642c9102bfc958ab14e36048fa77d0be3289dda76ea"},
{file = "mypy-0.782-cp38-cp38-win_amd64.whl", hash = "sha256:5dd13ff1f2a97f94540fd37a49e5d255950ebcdf446fb597463a40d0df3fac8b"},
{file = "mypy-0.782-py3-none-any.whl", hash = "sha256:e0b61738ab504e656d1fe4ff0c0601387a5489ca122d55390ade31f9ca0e252d"},
{file = "mypy-0.782.tar.gz", hash = "sha256:eff7d4a85e9eea55afa34888dfeaccde99e7520b51f867ac28a48492c0b1130c"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
outcome = [
{file = "outcome-1.1.0-py2.py3-none-any.whl", hash = "sha256:c7dd9375cfd3c12db9801d080a3b63d4b0a261aa996c4c13152380587288d958"},
{file = "outcome-1.1.0.tar.gz", hash = "sha256:e862f01d4e626e63e8f92c38d1f8d5546d3f9cce989263c521b2e7990d186967"},
]
pathspec = [
{file = "pathspec-0.8.1-py2.py3-none-any.whl", hash = "sha256:aa0cb481c4041bf52ffa7b0d8fa6cd3e88a2ca4879c533c9153882ee2556790d"},
{file = "pathspec-0.8.1.tar.gz", hash = "sha256:86379d6b86d75816baba717e64b1a3a3469deb93bb76d613c9ce79edc5cb68fd"},
]
pycodestyle = [
{file = "pycodestyle-2.7.0-py2.py3-none-any.whl", hash = "sha256:514f76d918fcc0b55c6680472f0a37970994e07bbb80725808c17089be302068"},
{file = "pycodestyle-2.7.0.tar.gz", hash = "sha256:c389c1d06bf7904078ca03399a4816f974a1d590090fecea0c63ec26ebaf1cef"},
]
pycparser = [
{file = "pycparser-2.20-py2.py3-none-any.whl", hash = "sha256:7582ad22678f0fcd81102833f60ef8d0e57288b6b5fb00323d101be910e35705"},
{file = "pycparser-2.20.tar.gz", hash = "sha256:2d475327684562c3a96cc71adf7dc8c4f0565175cf86b6d7a404ff4c771f15f0"},
]
pyflakes = [
{file = "pyflakes-2.3.0-py2.py3-none-any.whl", hash = "sha256:910208209dcea632721cb58363d0f72913d9e8cf64dc6f8ae2e02a3609aba40d"},
{file = "pyflakes-2.3.0.tar.gz", hash = "sha256:e59fd8e750e588358f1b8885e5a4751203a0516e0ee6d34811089ac294c8806f"},
]
pypandoc = [
{file = "pypandoc-1.5.tar.gz", hash = "sha256:14a49977ab1fbc9b14ef3087dcb101f336851837fca55ca79cf33846cc4976ff"},
]
python-dateutil = [
{file = "python-dateutil-2.8.1.tar.gz", hash = "sha256:73ebfe9dbf22e832286dafa60473e4cd239f8592f699aa5adaf10050e6e1823c"},
{file = "python_dateutil-2.8.1-py2.py3-none-any.whl", hash = "sha256:75bb3f31ea686f1197762692a9ee6a7550b59fc6ca3a1f4b5d7e32fb98e2da2a"},
]
regex = [
{file = "regex-2021.3.17-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:b97ec5d299c10d96617cc851b2e0f81ba5d9d6248413cd374ef7f3a8871ee4a6"},
{file = "regex-2021.3.17-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:cb4ee827857a5ad9b8ae34d3c8cc51151cb4a3fe082c12ec20ec73e63cc7c6f0"},
{file = "regex-2021.3.17-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:633497504e2a485a70a3268d4fc403fe3063a50a50eed1039083e9471ad0101c"},
{file = "regex-2021.3.17-cp36-cp36m-manylinux2010_i686.whl", hash = "sha256:a59a2ee329b3de764b21495d78c92ab00b4ea79acef0f7ae8c1067f773570afa"},
{file = "regex-2021.3.17-cp36-cp36m-manylinux2010_x86_64.whl", hash = "sha256:f85d6f41e34f6a2d1607e312820971872944f1661a73d33e1e82d35ea3305e14"},
{file = "regex-2021.3.17-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:4651f839dbde0816798e698626af6a2469eee6d9964824bb5386091255a1694f"},
{file = "regex-2021.3.17-cp36-cp36m-manylinux2014_i686.whl", hash = "sha256:39c44532d0e4f1639a89e52355b949573e1e2c5116106a395642cbbae0ff9bcd"},
{file = "regex-2021.3.17-cp36-cp36m-manylinux2014_x86_64.whl", hash = "sha256:3d9a7e215e02bd7646a91fb8bcba30bc55fd42a719d6b35cf80e5bae31d9134e"},
{file = "regex-2021.3.17-cp36-cp36m-win32.whl", hash = "sha256:159fac1a4731409c830d32913f13f68346d6b8e39650ed5d704a9ce2f9ef9cb3"},
{file = "regex-2021.3.17-cp36-cp36m-win_amd64.whl", hash = "sha256:13f50969028e81765ed2a1c5fcfdc246c245cf8d47986d5172e82ab1a0c42ee5"},
{file = "regex-2021.3.17-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:b9d8d286c53fe0cbc6d20bf3d583cabcd1499d89034524e3b94c93a5ab85ca90"},
{file = "regex-2021.3.17-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:201e2619a77b21a7780580ab7b5ce43835e242d3e20fef50f66a8df0542e437f"},
{file = "regex-2021.3.17-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:d47d359545b0ccad29d572ecd52c9da945de7cd6cf9c0cfcb0269f76d3555689"},
{file = "regex-2021.3.17-cp37-cp37m-manylinux2010_i686.whl", hash = "sha256:ea2f41445852c660ba7c3ebf7d70b3779b20d9ca8ba54485a17740db49f46932"},
{file = "regex-2021.3.17-cp37-cp37m-manylinux2010_x86_64.whl", hash = "sha256:486a5f8e11e1f5bbfcad87f7c7745eb14796642323e7e1829a331f87a713daaa"},
{file = "regex-2021.3.17-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:18e25e0afe1cf0f62781a150c1454b2113785401ba285c745acf10c8ca8917df"},
{file = "regex-2021.3.17-cp37-cp37m-manylinux2014_i686.whl", hash = "sha256:a2ee026f4156789df8644d23ef423e6194fad0bc53575534101bb1de5d67e8ce"},
{file = "regex-2021.3.17-cp37-cp37m-manylinux2014_x86_64.whl", hash = "sha256:4c0788010a93ace8a174d73e7c6c9d3e6e3b7ad99a453c8ee8c975ddd9965643"},
{file = "regex-2021.3.17-cp37-cp37m-win32.whl", hash = "sha256:575a832e09d237ae5fedb825a7a5bc6a116090dd57d6417d4f3b75121c73e3be"},
{file = "regex-2021.3.17-cp37-cp37m-win_amd64.whl", hash = "sha256:8e65e3e4c6feadf6770e2ad89ad3deb524bcb03d8dc679f381d0568c024e0deb"},
{file = "regex-2021.3.17-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a0df9a0ad2aad49ea3c7f65edd2ffb3d5c59589b85992a6006354f6fb109bb18"},
{file = "regex-2021.3.17-cp38-cp38-manylinux1_i686.whl", hash = "sha256:b98bc9db003f1079caf07b610377ed1ac2e2c11acc2bea4892e28cc5b509d8d5"},
{file = "regex-2021.3.17-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:808404898e9a765e4058bf3d7607d0629000e0a14a6782ccbb089296b76fa8fe"},
{file = "regex-2021.3.17-cp38-cp38-manylinux2010_i686.whl", hash = "sha256:5770a51180d85ea468234bc7987f5597803a4c3d7463e7323322fe4a1b181578"},
{file = "regex-2021.3.17-cp38-cp38-manylinux2010_x86_64.whl", hash = "sha256:976a54d44fd043d958a69b18705a910a8376196c6b6ee5f2596ffc11bff4420d"},
{file = "regex-2021.3.17-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:63f3ca8451e5ff7133ffbec9eda641aeab2001be1a01878990f6c87e3c44b9d5"},
{file = "regex-2021.3.17-cp38-cp38-manylinux2014_i686.whl", hash = "sha256:bcd945175c29a672f13fce13a11893556cd440e37c1b643d6eeab1988c8b209c"},
{file = "regex-2021.3.17-cp38-cp38-manylinux2014_x86_64.whl", hash = "sha256:3d9356add82cff75413bec360c1eca3e58db4a9f5dafa1f19650958a81e3249d"},
{file = "regex-2021.3.17-cp38-cp38-win32.whl", hash = "sha256:f5d0c921c99297354cecc5a416ee4280bd3f20fd81b9fb671ca6be71499c3fdf"},
{file = "regex-2021.3.17-cp38-cp38-win_amd64.whl", hash = "sha256:14de88eda0976020528efc92d0a1f8830e2fb0de2ae6005a6fc4e062553031fa"},
{file = "regex-2021.3.17-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:4c2e364491406b7888c2ad4428245fc56c327e34a5dfe58fd40df272b3c3dab3"},
{file = "regex-2021.3.17-cp39-cp39-manylinux1_i686.whl", hash = "sha256:8bd4f91f3fb1c9b1380d6894bd5b4a519409135bec14c0c80151e58394a4e88a"},
{file = "regex-2021.3.17-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:882f53afe31ef0425b405a3f601c0009b44206ea7f55ee1c606aad3cc213a52c"},
{file = "regex-2021.3.17-cp39-cp39-manylinux2010_i686.whl", hash = "sha256:07ef35301b4484bce843831e7039a84e19d8d33b3f8b2f9aab86c376813d0139"},
{file = "regex-2021.3.17-cp39-cp39-manylinux2010_x86_64.whl", hash = "sha256:360a01b5fa2ad35b3113ae0c07fb544ad180603fa3b1f074f52d98c1096fa15e"},
{file = "regex-2021.3.17-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:709f65bb2fa9825f09892617d01246002097f8f9b6dde8d1bb4083cf554701ba"},
{file = "regex-2021.3.17-cp39-cp39-manylinux2014_i686.whl", hash = "sha256:c66221e947d7207457f8b6f42b12f613b09efa9669f65a587a2a71f6a0e4d106"},
{file = "regex-2021.3.17-cp39-cp39-manylinux2014_x86_64.whl", hash = "sha256:c782da0e45aff131f0bed6e66fbcfa589ff2862fc719b83a88640daa01a5aff7"},
{file = "regex-2021.3.17-cp39-cp39-win32.whl", hash = "sha256:dc9963aacb7da5177e40874585d7407c0f93fb9d7518ec58b86e562f633f36cd"},
{file = "regex-2021.3.17-cp39-cp39-win_amd64.whl", hash = "sha256:a0d04128e005142260de3733591ddf476e4902c0c23c1af237d9acf3c96e1b38"},
{file = "regex-2021.3.17.tar.gz", hash = "sha256:4b8a1fb724904139149a43e172850f35aa6ea97fb0545244dc0b805e0154ed68"},
]
requests = [
{file = "requests-2.25.1-py2.py3-none-any.whl", hash = "sha256:c210084e36a42ae6b9219e00e48287def368a26d03a048ddad7bfee44f75871e"},
{file = "requests-2.25.1.tar.gz", hash = "sha256:27973dd4a904a4f13b263a19c866c13b92a39ed1c964655f025f3f8d3d75b804"},
]
six = [
{file = "six-1.15.0-py2.py3-none-any.whl", hash = "sha256:8b74bedcbbbaca38ff6d7491d76f2b06b3592611af620f8426e82dddb04a5ced"},
{file = "six-1.15.0.tar.gz", hash = "sha256:30639c035cdb23534cd4aa2dd52c3bf48f06e5f4a941509c8bafd8ce11080259"},
]
sniffio = [
{file = "sniffio-1.2.0-py3-none-any.whl", hash = "sha256:471b71698eac1c2112a40ce2752bb2f4a4814c22a54a3eed3676bc0f5ca9f663"},
{file = "sniffio-1.2.0.tar.gz", hash = "sha256:c4666eecec1d3f50960c6bdf61ab7bc350648da6c126e3cf6898d8cd4ddcd3de"},
]
sortedcontainers = [
{file = "sortedcontainers-2.3.0-py2.py3-none-any.whl", hash = "sha256:37257a32add0a3ee490bb170b599e93095eed89a55da91fa9f48753ea12fd73f"},
{file = "sortedcontainers-2.3.0.tar.gz", hash = "sha256:59cc937650cf60d677c16775597c89a960658a09cf7c1a668f86e1e4464b10a1"},
]
toml = [
{file = "toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b"},
{file = "toml-0.10.2.tar.gz", hash = "sha256:b3bda1d108d5dd99f4a20d24d9c348e91c4db7ab1b749200bded2f839ccbe68f"},
]
trio = [
{file = "trio-0.17.0-py3-none-any.whl", hash = "sha256:fc70c74e8736d1105b3c05cc2e49b30c58755733740f9c51ae6d88a4d6d0a291"},
{file = "trio-0.17.0.tar.gz", hash = "sha256:e85cf9858e445465dfbb0e3fdf36efe92082d2df87bfe9d62585eedd6e8e9d7d"},
]
typed-ast = [
{file = "typed_ast-1.4.2-cp35-cp35m-manylinux1_i686.whl", hash = "sha256:7703620125e4fb79b64aa52427ec192822e9f45d37d4b6625ab37ef403e1df70"},
{file = "typed_ast-1.4.2-cp35-cp35m-manylinux1_x86_64.whl", hash = "sha256:c9aadc4924d4b5799112837b226160428524a9a45f830e0d0f184b19e4090487"},
{file = "typed_ast-1.4.2-cp35-cp35m-manylinux2014_aarch64.whl", hash = "sha256:9ec45db0c766f196ae629e509f059ff05fc3148f9ffd28f3cfe75d4afb485412"},
{file = "typed_ast-1.4.2-cp35-cp35m-win32.whl", hash = "sha256:85f95aa97a35bdb2f2f7d10ec5bbdac0aeb9dafdaf88e17492da0504de2e6400"},
{file = "typed_ast-1.4.2-cp35-cp35m-win_amd64.whl", hash = "sha256:9044ef2df88d7f33692ae3f18d3be63dec69c4fb1b5a4a9ac950f9b4ba571606"},
{file = "typed_ast-1.4.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:c1c876fd795b36126f773db9cbb393f19808edd2637e00fd6caba0e25f2c7b64"},
{file = "typed_ast-1.4.2-cp36-cp36m-manylinux1_i686.whl", hash = "sha256:5dcfc2e264bd8a1db8b11a892bd1647154ce03eeba94b461effe68790d8b8e07"},
{file = "typed_ast-1.4.2-cp36-cp36m-manylinux1_x86_64.whl", hash = "sha256:8db0e856712f79c45956da0c9a40ca4246abc3485ae0d7ecc86a20f5e4c09abc"},
{file = "typed_ast-1.4.2-cp36-cp36m-manylinux2014_aarch64.whl", hash = "sha256:d003156bb6a59cda9050e983441b7fa2487f7800d76bdc065566b7d728b4581a"},
{file = "typed_ast-1.4.2-cp36-cp36m-win32.whl", hash = "sha256:4c790331247081ea7c632a76d5b2a265e6d325ecd3179d06e9cf8d46d90dd151"},
{file = "typed_ast-1.4.2-cp36-cp36m-win_amd64.whl", hash = "sha256:d175297e9533d8d37437abc14e8a83cbc68af93cc9c1c59c2c292ec59a0697a3"},
{file = "typed_ast-1.4.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:cf54cfa843f297991b7388c281cb3855d911137223c6b6d2dd82a47ae5125a41"},
{file = "typed_ast-1.4.2-cp37-cp37m-manylinux1_i686.whl", hash = "sha256:b4fcdcfa302538f70929eb7b392f536a237cbe2ed9cba88e3bf5027b39f5f77f"},
{file = "typed_ast-1.4.2-cp37-cp37m-manylinux1_x86_64.whl", hash = "sha256:987f15737aba2ab5f3928c617ccf1ce412e2e321c77ab16ca5a293e7bbffd581"},
{file = "typed_ast-1.4.2-cp37-cp37m-manylinux2014_aarch64.whl", hash = "sha256:37f48d46d733d57cc70fd5f30572d11ab8ed92da6e6b28e024e4a3edfb456e37"},
{file = "typed_ast-1.4.2-cp37-cp37m-win32.whl", hash = "sha256:36d829b31ab67d6fcb30e185ec996e1f72b892255a745d3a82138c97d21ed1cd"},
{file = "typed_ast-1.4.2-cp37-cp37m-win_amd64.whl", hash = "sha256:8368f83e93c7156ccd40e49a783a6a6850ca25b556c0fa0240ed0f659d2fe496"},
{file = "typed_ast-1.4.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:963c80b583b0661918718b095e02303d8078950b26cc00b5e5ea9ababe0de1fc"},
{file = "typed_ast-1.4.2-cp38-cp38-manylinux1_i686.whl", hash = "sha256:e683e409e5c45d5c9082dc1daf13f6374300806240719f95dc783d1fc942af10"},
{file = "typed_ast-1.4.2-cp38-cp38-manylinux1_x86_64.whl", hash = "sha256:84aa6223d71012c68d577c83f4e7db50d11d6b1399a9c779046d75e24bed74ea"},
{file = "typed_ast-1.4.2-cp38-cp38-manylinux2014_aarch64.whl", hash = "sha256:a38878a223bdd37c9709d07cd357bb79f4c760b29210e14ad0fb395294583787"},
{file = "typed_ast-1.4.2-cp38-cp38-win32.whl", hash = "sha256:a2c927c49f2029291fbabd673d51a2180038f8cd5a5b2f290f78c4516be48be2"},
{file = "typed_ast-1.4.2-cp38-cp38-win_amd64.whl", hash = "sha256:c0c74e5579af4b977c8b932f40a5464764b2f86681327410aa028a22d2f54937"},
{file = "typed_ast-1.4.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:07d49388d5bf7e863f7fa2f124b1b1d89d8aa0e2f7812faff0a5658c01c59aa1"},
{file = "typed_ast-1.4.2-cp39-cp39-manylinux1_i686.whl", hash = "sha256:240296b27397e4e37874abb1df2a608a92df85cf3e2a04d0d4d61055c8305ba6"},
{file = "typed_ast-1.4.2-cp39-cp39-manylinux1_x86_64.whl", hash = "sha256:d746a437cdbca200622385305aedd9aef68e8a645e385cc483bdc5e488f07166"},
{file = "typed_ast-1.4.2-cp39-cp39-manylinux2014_aarch64.whl", hash = "sha256:14bf1522cdee369e8f5581238edac09150c765ec1cb33615855889cf33dcb92d"},
{file = "typed_ast-1.4.2-cp39-cp39-win32.whl", hash = "sha256:cc7b98bf58167b7f2db91a4327da24fb93368838eb84a44c472283778fc2446b"},
{file = "typed_ast-1.4.2-cp39-cp39-win_amd64.whl", hash = "sha256:7147e2a76c75f0f64c4319886e7639e490fee87c9d25cb1d4faef1d8cf83a440"},
{file = "typed_ast-1.4.2.tar.gz", hash = "sha256:9fc0b3cb5d1720e7141d103cf4819aea239f7d136acf9ee4a69b047b7986175a"},
]
typing-extensions = [
{file = "typing_extensions-3.7.4.3-py2-none-any.whl", hash = "sha256:dafc7639cde7f1b6e1acc0f457842a83e722ccca8eef5270af2d74792619a89f"},
{file = "typing_extensions-3.7.4.3-py3-none-any.whl", hash = "sha256:7cb407020f00f7bfc3cb3e7881628838e69d8f3fcab2f64742a5e76b2f841918"},
{file = "typing_extensions-3.7.4.3.tar.gz", hash = "sha256:99d4073b617d30288f569d3f13d2bd7548c3a7e4c8de87db09a9d29bb3a4a60c"},
]
urllib3 = [
{file = "urllib3-1.26.4-py2.py3-none-any.whl", hash = "sha256:2f4da4594db7e1e110a944bb1b551fdf4e6c136ad42e4234131391e21eb5b0df"},
{file = "urllib3-1.26.4.tar.gz", hash = "sha256:e7b021f7241115872f92f43c6508082facffbd1c048e3c6e2bb9c2a157e28937"},
]
webencodings = [
{file = "webencodings-0.5.1-py2.py3-none-any.whl", hash = "sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78"},
{file = "webencodings-0.5.1.tar.gz", hash = "sha256:b36a1c245f2d304965eb4e0a82848379241dc04b865afcc4aab16748587e1923"},
]
zipp = [
{file = "zipp-3.4.1-py3-none-any.whl", hash = "sha256:51cb66cc54621609dd593d1787f286ee42a5c0adbb4b29abea5a63edc3e03098"},
{file = "zipp-3.4.1.tar.gz", hash = "sha256:3607921face881ba3e026887d8150cca609d517579abe052ac81fc5aeffdbd76"},
]

46
pyproject.toml

@@ -1,46 +0,0 @@
[build-system]
requires = ["poetry>=1.0.9,<2.0"]
build-backend = "poetry.masonry.api"

[tool.poetry]
name = "etherpump"
version = "0.0.20"
description = "Pumping text from etherpads into publications"
authors = ["Varia, Center for Everyday Technology"]
maintainers = ["Varia, Center for Everyday Technology <info@varia.zone>"]
license = "GPLv3"
readme = "README.md"
repository = "https://git.vvvvvvaria.org/varia/etherpump"
keywords = ["etherpad", "etherdump", "etherpump"]

[tool.poetry.dependencies]
python = "^3.6"
asks = "^2.4.10"
html5lib = "^1.1"
jinja2 = "^2.11.2"
pypandoc = "^1.5"
python-dateutil = "^2.8.1"
requests = "^2.24.0"
trio = "^0.17.0"

[tool.poetry.dev-dependencies]
black = "^19.10b0"
flake8 = "^3.8.3"
isort = "^5.0.2"
mypy = "^0.782"

[tool.poetry.scripts]
etherpump = "etherpump:main"

[tool.black]
line-length = 80
target-version = ["py38"]
include = '\.pyi?$'

[tool.isort]
include_trailing_comma = true
known_first_party = "abra"
known_third_party = "pytest"
line_length = 80
multi_line_output = 3
skip = ".tox"

36
setup.py Normal file

@@ -0,0 +1,36 @@
#!/usr/bin/env python3
import os
from distutils.core import setup


def find(p, d):
    # Collect every non-hidden file below p/d, as paths relative to p
    # (the form that package_data expects).
    ret = []
    for b, dd, ff in os.walk(os.path.join(p, d)):
        for f in ff:
            if not f.startswith("."):
                fp = os.path.join(b, f)
                ret.append(os.path.relpath(fp, p))
    ret.sort()
    return ret


setup(
    name='etherpump',
    version='0.0.1',
    author='Varia members',
    author_email='info@varia.zone',
    packages=['etherpump', 'etherpump.commands'],
    package_dir={'etherpump': 'etherpump'},
    package_data={'etherpump': find("etherpump", "data/")},
    scripts=['bin/etherpump'],
    url='https://git.vvvvvvaria.org/varia/etherpump',
    license='LICENSE.txt',
    description='Etherpump, an etherpad publishing system',
    install_requires=[
        "html5lib", "jinja2"
    ]
)
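For illustration, here is what `find` returns on an invented directory tree (the real contents of `etherpump/data/` are not shown in this diff); package-relative paths are exactly the shape that `package_data` expects:

```python
# Invented example tree:
#   etherpump/data/templates/index.html
#   etherpump/data/templates/pad.html
# find("etherpump", "data/") walks etherpump/data/ and returns sorted
# paths relative to the package directory:
find("etherpump", "data/")
# -> ['data/templates/index.html', 'data/templates/pad.html']
```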

62
stylesheet.css

@@ -1,62 +0,0 @@
html {
    border: 10px inset magenta;
    min-height: calc(100vh - 20px);
    min-width: 1000px;
}

body {
    margin: 1em;
    font-family: monospace;
    font-size: 16px;
    line-height: 1.3;
    background-color: #ffff00a3;
    color: green;
}

#welcome {
    max-width: 600px;
    margin: 1em 0;
}

table {
    min-width: 600px;
}

th,
td {
    text-align: left;
    padding: 0 1em 0 0;
    vertical-align: top;
}

td.name {
    width: 323px;
}

td.versions {
    width: 290px;
}

td.magicwords a {
    color: magenta;
}

hr {
    border: 0;
    border-bottom: 1px solid;
    margin: 2em 0 1em;
}

#footer {
    max-width: 600px;
}

.info {
    font-size: smaller;
}

.highlight {
    padding: 0.5em;
    background-color: rgb(255, 192, 203, 0.8);
}

.magic {
    margin-top: 2em;
}

.magicwords {
    padding-right: 5px;
}

.magicwords-publish {
    padding-right: 5px;
    display: inline;
    color: magenta;
    opacity: 0.4;
}


@@ -1,132 +0,0 @@
<!DOCTYPE html>
<html lang="{{ language }}">
<head>
  <meta charset="utf-8" />
  <title>{{ title }}</title>
  <link rel="stylesheet" type="text/css" href="{% block css %}stylesheet.css{% endblock %}">
  <!--<link rel="alternate" type="application/rss+xml" href="recentchanges.rss">-->
  {% block scripts %}
  {% endblock scripts %}
</head>
<body>
  {% set padoftheday = pads | random %}
  <h1>{{ title }}</h1>
  <div id="welcome">
    Welcome! The pages below have been deliberately published by their authors in
    order to share their thoughts, research and process in an early form. This
    page represents one of Varia's low-effort publishing tools. The pages are all
    produced through Varia's <a href="https://pad.vvvvvvaria.org/">Etherpad instance</a>.
    <br>
    <br>
    Etherpad is used as a collaborative writing tool to take notes, create readers,
    coordinate projects and document gatherings that happen in and around Varia.
    For example <a href="{{ padoftheday.link }}">{{ padoftheday.padid }}</a>.
    <br>
    <br>
    This index is updated every 60 minutes.
  </div>
  <table>
    <thead>
      <tr>
        <th>name</th>
        <th>versions</th>
        <!--<th>last edited</th>-->
        <!--<th>revisions</th>-->
        <!--<th>authors</th>-->
      </tr>
    </thead>
    <tbody>
      {% set allmagicwords = [] %}
      {% for pad in pads %}
      <tr>
        <td class="name">
          <a href="{{ pad.link }}">{{ pad.padid }}</a>
        </td>
        <td class="versions">
          {% for v in pad.versions %}<a href="{{ v.url }}">{{ v.type }}</a> {% endfor %}
        </td>
        <!-- WOW -->
        <td class="magicwords">
          {% for magicword in pad.magicwords | sort %}
            {% if magicword == "__PUBLISH__" %}
              <p class="magicwords-publish">{{ magicword }}</p>
            {% else %}
              <a class="magicwords" href="#{{ magicword }}">{{ magicword }}</a>
            {% endif %}
            {% if magicword %}
              <div style="display:none;">{{ allmagicwords.append(magicword) }}</div>
            {% endif %}
          {% endfor %}
        </td>
        <!--<td class="lastedited">{{ pad.lastedited_iso|datetimeformat }}</td>-->
        <!--<td class="revisions">{{ pad.revisions }}</td>-->
        <!--<td class="authors">{{ pad.author_ids|length }}</td>-->
      </tr>
      {% endfor %}
    </tbody>
  </table>
  <div id="magicarea">
    {% for magicword in allmagicwords | unique | sort %}
    {% if magicword != "__PUBLISH__" %}
    <div class="magic" id="{{ magicword }}">
      <h2>{{ magicword }}</h2>
      <table>
        <thead>
          <tr>
            <th>name</th>
            <th>versions</th>
          </tr>
        </thead>
        <tbody>
          {% for pad in pads %}
          {% if magicword in pad.magicwords %}
          <tr>
            <td class="name">
              <a href="{{ pad.link }}">{{ pad.padid }}</a>
            </td>
            <td class="versions">
              {% for v in pad.versions %}<a href="{{ v.url }}">{{ v.type }}</a> {% endfor %}
            </td>
            <!-- WOW -->
            <td class="magicwords">
              {% for magicword in pad.magicwords | sort %}
                {% if magicword == "__PUBLISH__" %}
                  <p class="magicwords-publish">{{ magicword }}</p>
                {% else %}
                  <a class="magicwords" href="#{{ magicword }}">{{ magicword }}</a>
                {% endif %}
              {% endfor %}
            </td>
          </tr>
          {% endif %}
          {% endfor %}
        </tbody>
      </table>
    </div>
    {% endif %}
    {% endfor %}
  </div>
  <div id="footer">
    <hr>
    <p>
      <small>
        This page is generated using <a href="https://git.vvvvvvaria.org/varia/etherpump">Etherpump</a>.
        It is a command-line utility and Python library that extends the multi
        writing and publishing functionalities of the Etherpad.
      </small>
      <br><br>
    </p>
    {% block info %}<p class="info">Last updated {{ timestamp }}.</p>{% endblock %}
  </div>
</body>
</html>
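The template above expects a `title`, `language`, `timestamp` and a list of `pads`, each carrying at least `padid`, `link`, `versions` and `magicwords`. Note that Jinja2 still evaluates `{{ ... }}` expressions inside HTML comments, so the commented-out cells also require `lastedited_iso`, `revisions`, `author_ids` and a custom `datetimeformat` filter. A minimal rendering sketch with Jinja2, using invented pad data and a stand-in filter (the template path and filename are assumptions, not taken from the repository):

```python
# Minimal sketch: render the index template with Jinja2. The pad data is
# invented; etherpump itself assembles it from the archived pad metadata.
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("templates"))
# Jinja2 evaluates expressions even inside HTML comments, so the custom
# datetimeformat filter must exist; a pass-through stands in for it here.
env.filters["datetimeformat"] = lambda value: value
template = env.get_template("index.html")  # template filename assumed

pads = [
    {
        "padid": "example-pad",
        "link": "https://pad.example.org/p/example-pad",
        "magicwords": ["__PUBLISH__", "publishing"],
        "versions": [
            {"url": "p/example-pad.raw.txt", "type": "text"},
            {"url": "p/example-pad.raw.html", "type": "html"},
        ],
        "lastedited_iso": "2021-03-17T12:00:00",
        "revisions": 1,
        "author_ids": [],
    },
]

print(template.render(
    title="Etherpump index",
    language="en",
    timestamp="2021-03-17 12:00",
    pads=pads,
))
```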