drompono
Posts posted by drompono
-
thanks!
items.append({
    'title': d['content'],
    'subtitle': datetime.fromtimestamp(d['creationDate']).strftime('%d.%m.%y - %H:%M'),
    'arg': d['localkey']
})
-
Fantastic! That's working! One last question: the modificationDate in the subtitle is a Unix timestamp like 1587040951.0199919 - how can I convert it to %d.%m.%y - %H:%M?
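For reference, a minimal sketch of that conversion (the sample value is the one from the post; the exact output depends on the local timezone):

```python
from datetime import datetime

# fromtimestamp() accepts the float seconds directly and
# strftime() simply drops the fractional part.
ts = 1587040951.0199919
formatted = datetime.fromtimestamp(ts).strftime('%d.%m.%y - %H:%M')
print(formatted)  # a date in April 2020; time depends on local timezone
```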
-
Hi! I'm using Simplenote and found out there is a CLI available. Now I want to create a workflow to search through my notes. I'm just starting to use the Script Filter, and I still don't know how to use the JSON output from the CLI to display the results in Alfred.
If I type something with the Script Filter:
/usr/local/bin/sncli -n export {query}
debugging shows me:
ERROR: Simplenote Search[Script Filter] Unable to decode JSON results, top level type is not dictionary in JSON: [ { "tags": [], "deleted": false, "shareURL": "", "publishURL": "", "content": "gmail", "systemTags": [ "markdown" ], "modificationDate": 1599312951.145001, "creationDate": 1569875316, "key": "XXX", "version": 58, "syncdate": 1600667523.891809, "localkey": "XXX", "savedate": 1602487240.5917828 }, { "tags": [], "deleted": 0, "shareURL": "", "publishURL": "", "content": "gmail 2", "systemTags": [ "markdown" ], "modificationDate": 1587040951.0199919, "creationDate": 1587031436.520257, "key": "XXX", "version": 23, "syncdate": 1587154334.194783, "localkey": "XXX", "savedate": 1602487240.5917828 } ]
How is it possible to create an item for each search result, for example with title: content, subtitle: creationDate, argument: localkey?
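Not an authoritative answer, but a sketch of the transformation being asked about: sncli exports a top-level JSON *list*, while Alfred's Script Filter expects a dict with an "items" key. The sample notes below just mirror fields from the error message:

```python
import json
from datetime import datetime

# Sample data mimicking sncli's export (a top-level JSON list).
notes_json = ('[{"content": "gmail", "creationDate": 1569875316, "localkey": "k1"},'
              ' {"content": "gmail 2", "creationDate": 1587031436.5, "localkey": "k2"}]')

notes = json.loads(notes_json)
items = [{'title': d['content'],
          'subtitle': datetime.fromtimestamp(d['creationDate']).strftime('%d.%m.%y'),
          'arg': d['localkey']}
         for d in notes]

# Wrap the list in the dict shape Alfred wants.
print(json.dumps({'items': items}))
```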
-
Ah ok, it seems editing in the Numbers app changed the separators from
,
to
;
so I changed the reader to
for row in csv.DictReader(fp, delimiter=';'):
but now I get this error:
[18:02:26.322] ERROR: Search Shopify CSV[Script Filter] Code 1: [fuzzy] .
[fuzzy] cmd=['/usr/bin/python', 'fuzzylist.py'], query='', session_id=None
[fuzzy] running command ['/usr/bin/python', 'fuzzylist.py'] ...
Traceback (most recent call last):
File "fuzzylist.py", line 51, in <module>
convert(INFILE, OUTFILE)
File "fuzzylist.py", line 37, in convert
'title': ITEM_TITLE.format(**row),
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 8: ordinal not in range(128)
Traceback (most recent call last):
File "./fuzzy.py", line 380, in <module>
main()
File "./fuzzy.py", line 367, in main
fb = cache.load()
File "./fuzzy.py", line 301, in load
js = check_output(self.cmd)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 223, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['/usr/bin/python', 'fuzzylist.py']' returned non-zero exit status 1
Do I have to add a .decode('ascii') call somewhere?
Sorry for all the questions
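For what it's worth, this kind of Python 2 UnicodeEncodeError usually comes from non-ASCII text (here the curly apostrophe u'\u2019') meeting the default ASCII codec during formatting. A sketch of the usual remedy, decoding the CSV as UTF-8 up front (in-memory data here; with a real file you'd use io.open(path, encoding='utf-8')):

```python
import csv
import io

# Made-up sample row containing the curly apostrophe from the traceback.
raw = u'title;arg;subtitle\nJohn\u2019s order;1;Berlin\n'

with io.StringIO(raw) as fp:  # stand-in for io.open('test.csv', encoding='utf-8')
    rows = list(csv.DictReader(fp, delimiter=';'))

# Formatting with unicode templates keeps the ASCII codec out of the picture.
title = u'{title}'.format(**rows[0])
print(title)
```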
-
Yes, that's it! Thank you very much, that's exactly what I was looking for!
Ok, I will check which will be better for filtering the results.
One last question: I replaced the dummy CSV with my actual CSV list, which is much larger. I deleted the .json file and triggered the workflow, but then I get this error:
[17:26:11.560] ERROR: Search Shopify CSV[Script Filter] Code 1: [fuzzy] .
[fuzzy] cmd=['/usr/bin/python', 'fuzzylist.py'], query='', session_id=None
[fuzzy] running command ['/usr/bin/python', 'fuzzylist.py'] ...
Traceback (most recent call last):
File "fuzzylist.py", line 51, in <module>
convert(INFILE, OUTFILE)
File "fuzzylist.py", line 37, in convert
'title': ITEM_TITLE.format(**row),
KeyError: 'shipping_address_name'
Traceback (most recent call last):
File "./fuzzy.py", line 380, in <module>
main()
File "./fuzzy.py", line 367, in main
fb = cache.load()
File "./fuzzy.py", line 301, in load
js = check_output(self.cmd)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/subprocess.py", line 223, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['/usr/bin/python', 'fuzzylist.py']' returned non-zero exit status 1
Do you know what could be the reason? Some fields are empty or contain "(", ä, or other special characters - could that be the cause?
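A side note, hedged rather than definitive: a KeyError from format(**row) usually means the format string names a column that isn't in the CSV header, not empty fields or special characters. A small sketch with made-up data showing how to inspect the keys that are actually available:

```python
import csv
import io

# Hypothetical two-column CSV; the header uses a dot, and str.format
# treats a dot in '{a.b}' as attribute access, so dotted column names
# are easier to use via direct indexing than via format(**row).
raw = 'shipping_address.name;total_price\nJane Doe;42.00\n'
reader = csv.DictReader(io.StringIO(raw), delimiter=';')
print(reader.fieldnames)  # the keys format(**row) can actually see

row = next(reader)
title = row['shipping_address.name'] + ' ' + row['total_price']
print(title)
```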
-
Thanks for your quick reply!
Ah ok, sorry - here is the link to my actual workflow:
https://dsc.cloud/93b597/Search-Shopify-CSV
The included .csv file has 14 rows - the fields named "shipping_address.name" and "total_price" should appear in the Alfred JSON title,
and "shipping_address.city" and "shipping_address.country" should appear in the subtitle. But if I do a search from Alfred, all fields should be included, so if I'm searching for an email address, the email field should be searched too - is this possible?
So should I just add to
the for row in reader loop
row['title'] = row['shipping_address.name'] + row['total_price']
or something like this?
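Roughly, yes - a sketch under those assumptions, with made-up sample data in place of the real CSV file. The 'match' field is Alfred's way of controlling what a Script Filter search matches against, so joining every column into it makes the email searchable as well:

```python
import csv
import io
import json

# Made-up order data using the column names from the post.
raw = ('shipping_address.name;total_price;shipping_address.city;shipping_address.country;email\n'
       'Jane Doe;42.00;Berlin;Germany;jane@example.com\n')

items = []
for row in csv.DictReader(io.StringIO(raw), delimiter=';'):
    items.append({
        'title': row['shipping_address.name'] + ' ' + row['total_price'],
        'subtitle': row['shipping_address.city'] + ' ' + row['shipping_address.country'],
        # every column goes into 'match', so searching an email hits too
        'match': ' '.join(row.values()),
        'arg': row['email'],
    })
print(json.dumps({'items': items}))
```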
-
Hey there!
I'm using the fuzzylist workflow template to display results from a .csv file; this is fuzzylist.py:
#!/usr/bin/python
from __future__ import print_function

import csv
import sys
import json
import os

theFile = "test.csv"
fieldnames = ["title", "arg", "subtitle"]
json_filename = theFile.split(".")[0] + ".json"

def convert(csv_filename, json_filename, fieldnames):
    f = open(csv_filename, 'r')
    csv_reader = csv.DictReader(f, fieldnames, restkey=None, restval=None)
    jsonf = open(json_filename, 'w')
    jsonf.write('{"items":[')
    data = ""
    for r in csv_reader:
        r['uid'] = r['arg']
        data = data + json.dumps(r) + ",\n"
    jsonf.write(data[:-2])
    jsonf.write(']}')
    f.close()
    jsonf.close()

if (not os.path.isfile(json_filename)) or (os.path.getmtime(theFile) > os.path.getmtime(json_filename)):
    convert(theFile, json_filename, fieldnames)

with open(json_filename, 'r') as fin:
    print(fin.read(), end="")
It works fine if I have only three columns in the CSV file, but what I want is to write the JSON file like this:
{"items": [ { "title": row[1] + ' ' + row[2], "subtitle": row[4] + row[5] + row[6] } ]}
For example, if I have 6 columns in my CSV file, I want to display column 1 and column 2 together in the search result title.
I'm sure it belongs somewhere around
data = data + json.dumps(r) + ",\n"
but I'm not good at Python programming, so I don't know how to add these lines. Can somebody help?
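A sketch of one way to do it, assuming generic column names c1..c6 (the real fieldnames list would differ): build a fresh item dict per row with the combined fields and dump that instead of the raw row.

```python
import csv
import io
import json

# Made-up six-column row; no header line, since fieldnames are supplied.
raw = 'Anna,Schmidt,x,Berlin,Mitte,10115\n'
fieldnames = ['c1', 'c2', 'c3', 'c4', 'c5', 'c6']

data = ''
for r in csv.DictReader(io.StringIO(raw), fieldnames):
    item = {
        'title': r['c1'] + ' ' + r['c2'],          # two columns in the title
        'subtitle': r['c4'] + r['c5'] + r['c6'],   # three in the subtitle
        'arg': r['c1'],
        'uid': r['c1'],
    }
    data = data + json.dumps(item) + ',\n'
print('{"items":[' + data[:-2] + ']}')
```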
Need help with Python Script Filter and Umlaute (Ä,Ö,Ü)
in Workflow Help & Questions
Posted
Hey there,
I've got a running workflow that searches through all my documents on my Paperless-ngx server via a Python script using requests and the Paperless API. Most of the workflow is just shamelessly copied from this great tutorial here, and it works without problems. Just one thing: I can't search for documents that contain Umlaute like Ü, Ä, etc., which makes this workflow almost useless for my German PDF documents.
Does anybody know how to fix this? I tried things like sys.argv[1].encode() or adding # encoding: utf-8 and from __future__ import unicode_literals at the top, but without success. The API documentation says the search query must look like /api/documents/?query=your%20search%20query, so I guess I need to percent-encode sys.argv[1] somewhere in the script? Thanks in advance; here is my script so far:
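A sketch of that percent-encoding idea using urllib.parse.quote, under stated assumptions: 'Müller Rechnung' stands in for sys.argv[1], and the base URL is a placeholder.

```python
from urllib.parse import quote  # Python 2 equivalent: from urllib import quote

# Percent-encode the raw query so umlauts become UTF-8 %XX escapes,
# matching the /api/documents/?query=your%20search%20query form.
query = quote('Müller Rechnung')
url = 'https://paperless.example/api/documents/?query=' + query
print(url)
```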