forked from mystiq/hydrogen-web
add notes and prototypes for sending, etc
This commit is contained in:
parent
20fa1448fd
commit
422cca746b
8 changed files with 355 additions and 11 deletions
@ -28,7 +28,7 @@
 - DONE: turn ObservableArray into ObservableSortedArray
 - upsert already sorted sections
 - DONE: upsert single entry
-- adapt TilesCollection & Tile to entry changes
+- DONE: adapt TilesCollection & Tile to entry changes

 - add live fragment id optimization if we haven't done so already
 - let's try to not have to have the fragment index in memory if the timeline isn't loaded
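The ObservableSortedArray upsert items above could look something like this sorted-array sketch. The class name matches the TODO, but the comparator contract and event hooks are assumptions, not hydrogen-web's actual API.

```javascript
// Minimal sketch of a sorted array with upsert, assuming a comparator
// that also defines identity (cmp === 0 means "same entry").
class ObservableSortedArray {
    constructor(comparator) {
        this._comparator = comparator;
        this._items = [];
    }

    // binary search for the first index where cmp(items[i], item) >= 0
    _lowerBound(item) {
        let lo = 0, hi = this._items.length;
        while (lo < hi) {
            const mid = (lo + hi) >> 1;
            if (this._comparator(this._items[mid], item) < 0) {
                lo = mid + 1;
            } else {
                hi = mid;
            }
        }
        return lo;
    }

    // update in place when an equal entry exists, insert otherwise
    upsert(item) {
        const idx = this._lowerBound(item);
        const existing = this._items[idx];
        if (existing !== undefined && this._comparator(existing, item) === 0) {
            this._items[idx] = item;                // would emit an "update" here
        } else {
            this._items.splice(idx, 0, item);       // would emit an "add" here
        }
    }

    get array() { return this._items; }
}
```

Upserting an already sorted section could then be a loop of single-entry upserts, or a merge step using the same `_lowerBound` helper.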
10 doc/GOAL.md

@ -1,11 +1,5 @@
 goal:

-to write a minimal matrix client that shows you all your rooms, allows you to pick one, and read and write messages in it.
+write a client that works on a lumia 950 phone, so I can use matrix on my phone.

-on the technical side, the goal is to go low-memory, and test the performance of storing every event individually in indexeddb.
+try an offline-first approach with indexeddb: go low-memory, and test the performance of storing every event individually in indexeddb.

 nice properties of this approach:

 easy to delete the oldest events when the db becomes a certain size/full (do we need a new pagination token after deleting the oldest? how to do that?)

 sync is persisted in one transaction, so you always have state at some sync_token
8 doc/RELATIONS.md (new file)

@ -0,0 +1,8 @@
Relations and redactions

events that refer to another event will need support in the SyncWriter, Timeline and SendQueue, I think.
SyncWriter will need to resolve the related remote id to a [fragmentId, eventIndex] and persist that on the event that relates to some other event. Same for SendQueue? If the remote id is unknown, there's not much to do. However, once the remote id comes in, how do we handle it correctly? We might need an index on m.relates_to/event_id?

The timeline can take incoming events from both the SendQueue and SyncWriter, see if their related-to fragmentId/eventIndex is in view, and then update it?

alternatively, SyncWriter/SendQueue could have a section with updatedEntries apart from newEntries?
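The SyncWriter-side resolution described above could be sketched like this; the in-memory maps stand in for the event store and the proposed m.relates_to/event_id index, and all names here are assumptions.

```javascript
// Hypothetical sketch: resolve m.relates_to to a [fragmentId, eventIndex]
// position, and remember unresolved relations so they can be fixed up once
// the related remote id arrives (the proposed index on event_id).
function resolveRelation(event, eventIdToPosition, pendingRelations) {
    const relatesTo = event.content?.["m.relates_to"]?.event_id;
    if (!relatesTo) {
        return event;
    }
    const position = eventIdToPosition.get(relatesTo); // [fragmentId, eventIndex]
    if (position) {
        return {...event, relatedPosition: position};
    }
    // remote id not known yet; index this event by the related event id
    // so it can be updated when the related event comes down /sync
    const waiting = pendingRelations.get(relatesTo) || [];
    waiting.push(event.event_id);
    pendingRelations.set(relatesTo, waiting);
    return event;
}
```

The timeline-update question above would then reduce to: when an annotated event comes in, check whether `relatedPosition` falls inside the loaded fragment window.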
14 doc/RELEASE.md (new file)

@ -0,0 +1,14 @@
release:
- bundling css files
- bundling javascript
- run index.html template for release as opposed to develop version?
- make list of all resources needed (images, html page)
- create appcache manifest + service worker
- create tarball + sign
- make gh release with tarball + signature

publish:
- extract tarball
- upload to static website
- overwrite index.html
- overwrite service worker & appcache manifest
- put new version files under /x.x.x
143 doc/SENDING.md

@ -10,6 +10,7 @@ how will we do local echo?
 a special kind of entry? will they be added to the same list?

 how do we store pending events?
+OBSOLETE, see PendingEvent below:
 separate store with:
    roomId
    txnId
@ -20,16 +21,18 @@ how do we store pending events?

 // all the fields that might need to be sent to the server when posting a particular kind of event
 PendingEvent
-    queueOrder
+    queueOrder //is this high enough to
     priority //high priority means it also takes precedence over events sent in other rooms ... but how will that scheduling work?
     txnId
     type
     stateKey
     redacts
     content
-    blobUploadByteOffset: to support resumable uploads?
+    localRelatedId //what's the id? queueOrder? e.g. this would be a local id that this event relates to. We might need an index on it to update the PendingEvent once the related PendingEvent is sent.
     blob: a blob that needs to be uploaded and turned into an mxc url to put into the content.url field before sending the event
         there is also info.thumbnail_url
+    blobMimeType? Or stored as part of blob?
+    //blobUploadByteOffset: to support resumable uploads?

 so when sending an event, we don't post a whole object, just the content, or a state key and content, or a redacts id.
 however, it's somewhat interesting to pretend an event has the same structure before it is sent as when it comes down from the server, so all the logic can reuse the same structure...
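The PendingEvent field list above could be rendered as a plain object; this sketch is illustrative only, with invented defaults and an invented txnId format.

```javascript
// Illustrative PendingEvent shape based on the field list above;
// concrete values and the txnId scheme are made up.
function createPendingEvent({queueOrder, type, content, stateKey = null,
        redacts = null, localRelatedId = null, blob = null, blobMimeType = null}) {
    return {
        queueOrder,                               // position in the room's send queue
        priority: 0,                              // higher wins over other rooms' events
        txnId: `t${Date.now()}-${queueOrder}`,    // client-generated transaction id
        type,
        stateKey,                                 // only for state events
        redacts,                                  // only for redactions
        content,
        localRelatedId,                           // local id of a not-yet-sent related event
        blob,                                     // to upload and turn into an mxc url first
        blobMimeType,
        status: "waiting",
    };
}
```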
@ -57,3 +60,139 @@
 - sent

 offline is an external factor ... we probably need to deal with it throughout the app / matrix level in some way ...
    - we could have a callback on room for online/offline that is invoked by session, where they can start sending again?
      perhaps with a transaction already open on the pending_events store

How could the SendQueue update the timeline? By having an ObservableMap for its entries in the queue
    Room
        SendQueue
        Timeline

steps of sending

```javascript
//at some point:
// sender is the thing that is shared across rooms to handle rate limiting.
const sendQueue = new SendQueue({roomId, hsApi, sender, storage});
await sendQueue.load(); //loads the queue?
//might need to load members for e2e rooms

class SendQueue {
    // when trying to send
    enqueueEvent(pendingEvent) {
        // store event
        // if online and not running send loop
        // start sending loop
    }

    // send loop
    // findNextPendingEvent comes from memory or store?
    // if it's a different object than in the timeline, how to update the timeline thingy?
    //   by entryKey? update it?
    async _sendLoop() {
        let pendingEvent;
        while ((pendingEvent = await findNextPendingEvent())) {
            pendingEvent.status = QUEUED;
            try {
                await this.sender.sendEvent(() => {
                    // callback gets called
                    pendingEvent.status = SENDING;
                    return pendingEvent;
                });
            } catch (err) {
                //offline
            }
            pendingEvent.status = SENT;
        }
    }

    resumeSending(online) {
        // start loop again when back online
    }

    // on sync, when we received an event with a transaction_id;
    // the first argument is the transaction_id,
    // the second is the storage transaction to modify the pendingevent store if needed
    receiveRemoteEcho(txnId, txn) {

    }

    // returns entries? to be appended to the timeline?
    // return an ObservableList here? Rather an ObservableMap? what ID? queueOrder? that won't be unique over time?
    //
    // wrt to relations and redactions, we will also need the list of current
    // or we could just do a lookup of the local id to remote once
    // it's time to send an event ... perhaps we already have the txn open anyways.
    // so we will need to store the event_id returned from /send...
    // but by the time it's time to send an event, the one it relates to might already have been
    // removed from pendingevents?
    // maybe we should have an index on relatedId or something stored in pendingevents and that way
    // we can update it once the relatedto event is sent
    // ok, so we need an index on relatedId, not the full list for anything apart from timeline display? think so ...
    get entriesMap() {

    }
}

class Room {
    resumeSending(online) {
        if (online) {
            this.sendQueue.setOnline(online);
        }
    }
}
```
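The send-loop sketch above can be exercised with an in-memory queue and a stubbed sender; everything here (the status constants, the stub, the `sendAll` entry point) is invented for the demo, not the real SendQueue API.

```javascript
// Runnable toy version of the send loop above: events move through
// waiting -> queued -> sending -> sent; the sender is a stub.
const WAITING = "waiting", QUEUED = "queued", SENDING = "sending", SENT = "sent";

class ToySendQueue {
    constructor(sender) {
        this._sender = sender;
        this._queue = [];
    }

    enqueueEvent(pendingEvent) {
        pendingEvent.status = WAITING;
        this._queue.push(pendingEvent);
    }

    // stand-in for findNextPendingEvent in the notes
    _findNextPendingEvent() {
        return this._queue.find(e => e.status === WAITING);
    }

    // drains the queue, mirroring _sendLoop above
    async sendAll() {
        let pendingEvent;
        while ((pendingEvent = this._findNextPendingEvent())) {
            pendingEvent.status = QUEUED;
            const response = await this._sender.sendEvent(() => {
                pendingEvent.status = SENDING; // our turn with the rate limiter
                return pendingEvent;
            });
            pendingEvent.remoteId = response.event_id;
            pendingEvent.status = SENT;
        }
    }
}

// stub sender: invokes the callback and fakes a /send response
const stubSender = {
    async sendEvent(getEvent) {
        const event = getEvent();
        return {event_id: `$remote-${event.queueOrder}`};
    }
};
```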

we were thinking before of having a more lightweight structure to export from the timeline, where we only keep a sorted list/set of keys in the collection, and we emit ranges of sorted keys that are either added, updated or removed. we could easily join this with the timeline, and values would only be stored by the TilesCollection. We do however need to peek into the queue to update local relatedTo ids.

probably best to keep the send queue in memory.

so, persistence steps in sending:
- get largest queueOrder + 1 as id/new queueOrder
  - the downside of this is that when the last event is sent at the same time as adding a new event, it would become an update? but the code paths being separate (receiveRemoteEcho and enqueueEvent) probably prevent this.
- persist incoming pending event
- update with remote id if relatedId for pending event
- update once attachment(s) are sent
  - send in-memory updates of upload progress through the pending event entry
  - if the media store supports resumable uploads, we *could* also periodically store how much was uploaded already. But the current REST API can't support this.
- update once sent (we don't remove here until we've received the remote echo)
  - store the remote event id so events that will relate to this pending event can get the remote id through getRelateToId()
- remove once remote echo is received

(Pending)EventEntry will need a method getRelateToId() that can return an instance of LocalId or something for unsent events
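The persistence steps above can be mimicked with an in-memory stand-in for the store; the method names follow the notes (add, maxQueueOrder, receiveRemoteEcho) but the behavior is assumed, not the real PendingEventStore.

```javascript
// In-memory stand-in for PendingEventStore illustrating the steps above.
class ToyPendingEventStore {
    constructor() {
        this._events = new Map(); // queueOrder -> pending event
    }

    maxQueueOrder() {
        return Math.max(0, ...this._events.keys());
    }

    // "get largest queueOrder + 1 as id/new queueOrder"
    add(event) {
        event.queueOrder = this.maxQueueOrder() + 1;
        this._events.set(event.queueOrder, event);
        return event.queueOrder;
    }

    // "store the remote event id so events that will relate to this pending
    //  event can get the remote id through getRelateToId()"
    markSent(queueOrder, remoteId) {
        this._events.get(queueOrder).remoteId = remoteId;
    }

    // "remove once remote echo is received"
    receiveRemoteEcho(txnId) {
        for (const [order, event] of this._events) {
            if (event.txnId === txnId) {
                this._events.delete(order);
                return event;
            }
        }
        return null;
    }

    get(queueOrder) { return this._events.get(queueOrder); }
}
```

Note how removing the last event while a new one is added keeps queueOrder monotonic only within the lifetime of the store; the notes flag exactly this as a potential race between receiveRemoteEcho and enqueueEvent.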

if we're not rate limited, we'll want to upload attachments in parallel with sending messages, before the attachee event.

so, as long as we're not rate limited, we'd want several queues to send per room:

```
sender (room 1)
---------------------
  ^         ^
event1   attachment1
  ^          |
event2-------
```

later on we can make this possible; for now we just upload the attachments right before the event.

so, we need to write:

RateLimitedSender
    all rate-limited rest api calls go through here so it can coordinate which ones should be prioritized, and not do more requests than needed while rate limited. It will have a list of current requests and initially just go from first to last, but later could implement prioritizing the current room, events before attachments, ...

SendQueue (talks to store, has the queue logic)
    for now will live under timeline, as you can't send events for rooms you are not watching? could also live under Room so it's always available if needed

PendingEvent (what the store returns)
    perhaps doesn't even need a class? can all go in the entry

PendingEventEntry (conforms to Entry API)
    can have static helper functions to create given kinds of events
        PendingEventEntry.stateEvent(type, stateKey, content)
        PendingEventEntry.event(type, content, {url: file, "info.thumbnail_url": thumb_file})
        PendingEventEntry.redaction(redacts)

PendingEventStore
    add()
    maxQueueOrder
    getAll()
    get()
    update()
    remove()
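The RateLimitedSender described in the list above could start as a simple first-to-last queue of request callbacks, with one request in flight at a time; prioritization and real rate-limit backoff are left out, and every name here (including the fake server call) is an assumption.

```javascript
// Toy RateLimitedSender: serializes sendEvent callbacks first-to-last,
// so only one request is in flight at a time.
class RateLimitedSender {
    constructor() {
        this._pending = [];
        this._running = false;
    }

    sendEvent(prepare) {
        return new Promise((resolve, reject) => {
            this._pending.push({prepare, resolve, reject});
            this._runNext();
        });
    }

    async _runNext() {
        if (this._running) {
            return;
        }
        this._running = true;
        while (this._pending.length) {
            // initially just go from first to last; prioritizing the current
            // room, or events before attachments, could slot in here later
            const {prepare, resolve, reject} = this._pending.shift();
            try {
                const event = prepare(); // callback gets called when it's this request's turn
                resolve(await fakeSendToServer(event));
            } catch (err) {
                reject(err);
            }
        }
        this._running = false;
    }
}

// stand-in for the homeserver /send API call
async function fakeSendToServer(event) {
    return {event_id: `$${event.txnId}`};
}
```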
2 doc/domexception_mapping.md (new file)

@ -0,0 +1,2 @@
err.name: explanation
DataError: parameters to idb request were invalid
161 prototypes/idb-store-files.html (new file)

@ -0,0 +1,161 @@
<html>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
    <ul id="files"></ul>
    <p>
        <input type="file" id="file" multiple capture="user" accept="image/*">
        <button id="addFile">Add</button>
        <button id="drop">Delete all</button>
    </p>
    <script type="text/javascript">

    function reqAsPromise(req) {
        return new Promise((resolve, reject) => {
            req.onsuccess = () => resolve(req);
            req.onerror = () => reject(req.error);
        });
    }

    function fetchResults(cursor, isDone, resultMapper) {
        return new Promise((resolve, reject) => {
            const results = [];
            cursor.onerror = (event) => {
                reject(new Error("Query failed: " + event.target.errorCode));
            };
            // collect results
            cursor.onsuccess = (event) => {
                const cursor = event.target.result;
                if (!cursor) {
                    resolve(results);
                    return; // end of results
                }
                results.push(resultMapper(cursor));
                if (!isDone(results)) {
                    cursor.continue();
                } else {
                    resolve(results);
                }
            };
        });
    }

    class Storage {
        constructor(databaseName) {
            this._databaseName = databaseName;
            this._database = null;
        }

        async open() {
            const req = window.indexedDB.open(this._databaseName);
            req.onupgradeneeded = (ev) => {
                const db = ev.target.result;
                const oldVersion = ev.oldVersion;
                this._createStores(db, oldVersion);
            };
            await reqAsPromise(req);
            this._database = req.result;
        }

        async drop() {
            if (this._database) {
                this._database.close();
                this._database = null;
            }
            await reqAsPromise(window.indexedDB.deleteDatabase(this._databaseName));
        }

        _createStores(db) {
            db.createObjectStore("files", {keyPath: "id"});
        }

        async storeFile(file) {
            const id = Math.floor(Math.random() * 10000000000);
            console.log(`adding a file as id ${id}`);
            const tx = this._database.transaction(["files"], "readwrite");
            const store = tx.objectStore("files");
            await reqAsPromise(store.add({id, file}));
        }

        getFiles() {
            const tx = this._database.transaction(["files"], "readonly");
            const store = tx.objectStore("files");
            const cursor = store.openCursor();
            return fetchResults(cursor,
                () => false,
                (cursor) => cursor.value);
        }
    }

    async function reloadFiles(storage, fileList) {
        const files = await storage.getFiles();
        const fileNodes = files.map(f => {
            const {type, size, name} = f.file;
            const txt = document.createTextNode(`${f.id} - ${name} of type ${type} - size: ${(size / 1024).toFixed(2)}kb`);
            const li = document.createElement("li");
            li.addEventListener("click", async () => {
                const reader = new FileReader();
                const promise = new Promise((resolve, reject) => {
                    reader.onload = e => resolve(e.target.result);
                    reader.onerror = e => reject(e.target.error);
                });
                reader.readAsArrayBuffer(f.file);
                try {
                    const buf = await promise;
                    alert(`read blob, len is ${buf.byteLength}`);
                } catch(e) {
                    alert(e.message);
                }
            });
            li.appendChild(txt);
            return li;
        });
        fileList.innerHTML = "";
        for(const li of fileNodes) {
            fileList.appendChild(li);
        }
    }

    async function main() {
        let storage = new Storage("idb-store-files-test");
        await storage.open();

        const fileList = document.getElementById("files");
        const dropButton = document.getElementById("drop");
        const addButton = document.getElementById("addFile");
        const filePicker = document.getElementById("file");
        addButton.addEventListener("click", async () => {
            const files = Array.from(filePicker.files);
            try {
                for(const file of files) {
                    await storage.storeFile(file);
                }
                alert(`stored ${files.length} files!`);
                reloadFiles(storage, fileList);
            } catch(e) {
                alert(e.message);
            }
        });
        dropButton.addEventListener("click", async () => {
            try {
                if (storage) {
                    await storage.drop();
                    storage = null;
                    alert("dropped db");
                }
            } catch(e) {
                alert(e.message);
            }
        });
        reloadFiles(storage, fileList);
    }

    main();

    </script>
</body>
</html>
26 prototypes/online.html (new file)

@ -0,0 +1,26 @@
<html>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
    <ul id="statuses"></ul>
    <script type="text/javascript">
        const list = document.getElementById("statuses");

        function appendOnlineStatus(onLine) {
            const label = onLine ? "device is now online" : "device is now offline";
            const txt = document.createTextNode(label);
            const li = document.createElement("li");
            li.appendChild(txt);
            list.appendChild(li);
        }

        window.addEventListener('offline', () => appendOnlineStatus(false));
        window.addEventListener('online', () => appendOnlineStatus(true));

        appendOnlineStatus(navigator.onLine);
    </script>
</body>
</html>