RSS feed «Python»
Latest news
TypeIs does what I thought TypeGuard would do in Python
2024-04-28 06:32 /u/Ok_Analysis_4910
While it's unfortunate to have two constructs—TypeGuard and TypeIs—with slightly different behaviors, I'm glad that the latter is less surprising.
Framework/Tool to build a simple app connected to snowflake
2024-04-28 05:52 /u/bolt_runner
I have use cases for a simple app that gives users access to Snowflake data.
Use cases involve:
- Exposing data to users; this data can be a query, a view, or a table
- Exposing a table to the users and allowing them to update values in it
I tried doing this using Streamlit on Snowflake, where the user can access the app directly with their Snowflake user account, but it had a lot of limitations and didn't serve the requirement.
Instead, I created a native Streamlit app deployed on a VM, using Snowpark as the backend. This setup achieved my goal, but it is not optimal. I'm curious whether there is a better solution that can handle multiple use cases or apps.
Is Python finished?
2024-04-28 05:16 /u/Both-Purple-1515
I was teaching myself Python. What should I do now?
Context: "Google lays off Python Team"
Does AI just write python now?
Share Project: NLLB-200 Distilled 350M en-ko
2024-04-28 04:39 /u/SaeChan5
I'm excited to share a project that was initially intended for use in my graduation (capstone) product.
What My Project Does
I made an NLLB-200 Distilled 350M model for translating English to Korean.
Target Audience
GPU servers are quite expensive, so I made it for university students who can't afford a server (like me).
Comparison
It's even smaller and faster than the other NLLB-200 models, so it can be run on a CPU!
More details are on my page.
If you know Korean, please give me plenty of feedback.
https://github.com/newfull5/NLLB-200-Distilled-350M-en-ko
thank you!!
Sunday Daily Thread: What's everyone working on this week?
2024-04-28 03:00 /u/AutoModerator
Weekly Thread: What's Everyone Working On This Week?
Hello /r/Python! It's time to share what you've been working on! Whether it's a work-in-progress, a completed masterpiece, or just a rough idea, let us know what you're up to!
How it Works:
- Show & Tell: Share your current projects, completed works, or future ideas.
- Discuss: Get feedback, find collaborators, or just chat about your project.
- Inspire: Your project might inspire someone else, just as you might get inspired here.
Guidelines:
- Feel free to include as many details as you'd like. Code snippets, screenshots, and links are all welcome.
- Whether it's your job, your hobby, or your passion project, all Python-related work is welcome here.
Example Shares:
- Machine Learning Model: Working on an ML model to predict stock prices. Just cracked a 90% accuracy rate!
- Web Scraping: Built a script to scrape and analyze news articles. It's helped me understand media bias better.
- Automation: Automated my home lighting with Python and Raspberry Pi. My life has never been easier!
Let's build and grow together! Share your journey and learn from others. Happy coding!
Are PEP 744 goals very modest?
2024-04-28 00:02 /u/MrMrsPotts
PyPy has been able to speed up pure Python code by a factor of 5 or more for a number of years. The only disadvantage it has is the difficulty of handling non-Python modules, which are very commonly used in practice.
https://peps.python.org/pep-0744/ seems to be talking about speedups of 5-10%. Why are the goals so much more modest than what PyPy can already achieve?
ArchiveFile: Unified interface for tar, zip, sevenzip, and rar files
2024-04-27 23:14 /u/PredatorOwl
What My Project Does
archivefile is a wrapper around `tarfile`, `zipfile`, `py7zr`, and `rarfile`. The above libraries are excellent when you are dealing with a single archive format, but things quickly get annoying when you have a bunch of mixed archives such as `.zip`, `.7z`, `.cbr`, `.tar.gz`, etc., because each library has a slightly different syntax and quirks which you need to deal with. archivefile wraps the common methods from the above libraries to provide a unified interface that takes care of said differences under the hood. However, it's not as powerful as the libraries it wraps, due to lack of support for features that are unique to a specific archive format and library.
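The per-format divergence the post describes is visible even inside the standard library; a quick stdlib-only illustration (this is not archivefile's API):

```python
import io
import tarfile
import zipfile

# Writing and listing a zip archive: ZipFile uses writestr() and namelist().
zip_buf = io.BytesIO()
with zipfile.ZipFile(zip_buf, "w") as zf:
    zf.writestr("hello.txt", "hi")
zip_buf.seek(0)
with zipfile.ZipFile(zip_buf) as zf:
    zip_names = zf.namelist()

# The same task with tar: TarFile needs a TarInfo object, addfile(), and
# the listing method is called getnames() instead of namelist().
tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w") as tf:
    data = b"hi"
    info = tarfile.TarInfo("hello.txt")
    info.size = len(data)
    tf.addfile(info, io.BytesIO(data))
tar_buf.seek(0)
with tarfile.open(fileobj=tar_buf) as tf:
    tar_names = tf.getnames()

print(zip_names, tar_names)  # ['hello.txt'] ['hello.txt']
```

Same conceptual operation, two incompatible APIs; that is the gap a unified wrapper smooths over.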
Target audience
Anyone who's using Python to deal with different archive formats
Comparison
- ZipFile, TarFile, RarFile, and py7zr - These are the libraries that mine wraps, since each of them can only deal with a single archive format
- shutil - Shutil can only deal with zip and tar files, and only allows full packing or full extraction.
- patool - Excellent library that deals with a wider range of formats than mine, but in doing so it provides less granular control over each format.

ArchiveFile falls somewhere between the powerful dedicated libraries and the far less powerful universal libraries.
Links
Repository: https://github.com/Ravencentric/archivefile
Docs: https://ravencentric.github.io/archivefile
Pure Python Physics Engine
2024-04-27 22:24 /u/More-Tower9993
What My Project Does
The physics engine, called PhysEng, provides an easy-to-use environment and visualization combo in which to try out different physics, or even a template to design your own acceleration/velocity fields. Aside from the visualization aspect and numpy, the basic functions of the engine are written completely in pure Python. The features included in the engine are:
- Particles, Soft Bodies, Anchor points
- Built in Fields: Drag, Uniform Force Fields, Gravity Between Particles, Octree Gravity etc
- Make your own: There are standard templates included in the Examples to design your own fields
- Springs - Construct soft bodies using springs. (Built-in soft bodies: cloth and ball)
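To make the "design your own fields" idea concrete, here is a minimal sketch of the general pattern (this is an illustration of the concept, not PhysEng's actual API): a field maps particle state to an acceleration, and a simple Euler integrator advances the particle each step.

```python
# Hypothetical custom field: constant downward gravity.
def gravity_field(pos, vel):
    return (0.0, -9.81)

# Semi-implicit Euler step: update velocity from the field, then position.
def euler_step(pos, vel, field, dt):
    ax, ay = field(pos, vel)
    vel = (vel[0] + ax * dt, vel[1] + ay * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

pos, vel = (0.0, 0.0), (1.0, 0.0)
for _ in range(100):  # simulate 1 second in 10 ms steps
    pos, vel = euler_step(pos, vel, gravity_field, 0.01)

print(round(vel[1], 3))  # vertical velocity after 1 s of free fall: -9.81
```

Swapping `gravity_field` for any function of position and velocity gives drag, uniform force fields, and so on.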
Target Audience
PhysEng is made for people who just want to try out different simple simulations or want to design their own physics.
Comparison
Looking through GitHub, I could never really find a simple and easy-to-use library that did not require me to install some weird libraries, or that didn't feel like it was hiding some process from me behind packages. This package is a solution to that: since everything is written in Python, nothing is a secret and everything can be inspected easily.
Get PhysEng
There is full documentation available in the GitHub repo: https://github.com/levi2234/PhysEng
Cross-platform python3 shebang
2024-04-27 21:44 /u/tedkotz
There is no shebang line that actually works across platforms for python 3.
I would like one that works on an unmodified:
- Debian shell (dropped python2, falls under PEP 394)
- Older Linux shells that still have `python` pointing to python2 (PEP 394)
- Windows `cmd.exe` shell (this really just means one that will work with PEP 397)
- Git Bash for Windows (sort of a weird half-sibling that respects shebangs)
The best workaround I have found is:
- use `#!/usr/bin/env python3`
- on Windows, copy `python.exe` to `python3.exe`
- then make sure both are in your path for unix-like shells
- on Debian, make sure `python-is-python2` or `python-is-python3` is installed, in case you come upon a `#!/usr/bin/env python`.
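A defensive pattern that pairs well with the workaround above (a sketch, not part of the original post's recipe): keep the `python3` shebang, but also fail fast with a clear message if the script ends up launched by Python 2 anyway.

```python
#!/usr/bin/env python3
import sys

# If a stray `python` -> python2 mapping picked us up, say so and exit
# instead of failing later with a confusing SyntaxError.
if sys.version_info < (3,):
    sys.exit("This script requires Python 3; got %s" % sys.version.split()[0])

print("running under Python %d.%d" % sys.version_info[:2])
```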
As Windows adopts more and more Unix-like behavior, and more distros drop python2, having completely different "portability" rules is going to become a larger problem.
A significant compatibility enhancement would be if the official Python packages for Windows just included a `python3.exe` to comply with PEP 394. This could be a copy of `python.exe` like my workaround, or it could be a minimal executable that just hands off to the other or to `py`.
An alternative would be adding `py` and `pyw` from PEP 397 to PEP 394, and having people move to the shebang `#!/usr/bin/env py -3`.
The belt-and-suspenders compatibility approach is that all platforms should have `py`, `pyw`, and `python3` executables that can launch python3 scripts if requested, and `python` should be an executable that runs some version of Python.
I am curious what others are using out there. Do others launch Python scripts from inside Git Bash? Do you have a separate window for running the script and git actions? Are you manually choosing the Python executable on the command line?
I made Tkinter "DevTools" to inspect and modify widgets in your running app in real-time
2024-04-27 20:07 /u/254hypebeast
What my project does
The Formation debugger is a tool in Formation-studio (a cool drag-n-drop Tkinter GUI designer, also made by me) that allows one to hook into running Tkinter applications, inspect the widget hierarchy, and modify properties and layouts, all in real-time. It even includes a console allowing you to trigger events, monitor logs, and just about anything else you can think of doing. To put it simply, it's sort of like Chrome DevTools, but for Tkinter.
Target Audience
- Tkinter beginners who want to learn how the framework works through experimentation
- Tkinter experts trying to figure out why a widget deep in the hierarchy of a complex application isn't acting right
- Tkinter developers who are tired of having to modify and run code a million times just to get everything looking right
Comparison
I don't think there exists anything else that does this but correct me if I'm wrong.
Usage
It comes bundled with Formation Studio, so the installation is as simple as

```shell
pip install formation-studio
```

To use it, run the command

```shell
formation-dbg /path/to/your/tk/app.py
```
In the embedded python REPL console you can access a simple debugger API as follows:
```python
# Access all widgets currently selected
widgets = debugger.selected

# Access the root widget, usually a Tk object
root = debugger.root
```
It is still a work in progress and may be a little buggy.
Technical details
Since the debugger has to hook into an arbitrary application without interfering with it, the debugger UI, also written in Tkinter, runs in a separate process. There is therefore tons of IPC to be conducted to get the whole thing working. The code for this is new and may be unclean, but is still interesting to look at. The code for this tool is under `studio/debugtools` in the Formation studio source.
In what way do you try out small things when developing?
2024-04-27 18:03 /u/HatWithAChat
I've noticed at work that my coworkers and I try out small things in different ways. Small things like checking whether adding two datetimes together behaves the way you expect. Some people use a Jupyter notebook for this, and others run Python interactively in a separate command prompt.
I usually run debug in whatever IDE I'm using, let it stop at the code I'm currently developing, and then use the debug console to test things out. Sometimes this means just leaving the debugger at a breakpoint for half an hour while I continue writing code. Is my way of doing it weird, or does it have any disadvantages? How do you usually test things out on the go in a good way?
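For the record, the datetime example mentioned above is exactly the kind of thing a quick interactive check settles: two datetimes cannot be added, only subtracted, and addition is defined between a datetime and a timedelta.

```python
from datetime import datetime, timedelta

a = datetime(2024, 4, 27, 12, 0)
b = datetime(2024, 4, 27, 13, 30)

# Adding two datetimes raises TypeError.
try:
    a + b
except TypeError as e:
    print("addition fails:", e)

print(b - a)                   # subtraction gives a timedelta: 1:30:00
print(a + timedelta(hours=2))  # 2024-04-27 14:00:00
```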
I made an easy and secure data lake for Pandas
2024-04-27 15:40 /u/realstoned
What My Project Does
Shoots is essentially a "data lake" where you can easily store pandas dataframes and retrieve them later, from different locations or in different tools. Shoots has a client and a server. After choosing a place to run the server, you can easily use the client to "put" and "get" dataframes. Shoots supports SQL, allowing you to put very large dataframes and then use a query to get only a subset. Shoots also allows you to resample on the server.
```python
# put a dataframe; uploads it to the server
df = pd.read_csv('sensor_data.csv')
shoots.put("sensor_data", dataframe=df, mode=PutMode.REPLACE)

# retrieve the whole dataframe
df0 = shoots.get("sensor_data")
print(df0)

# or use sql to retrieve just some of the data
sql = 'select "Sensor_1" from sensor_data where "Sensor_2" < .2'
df1 = shoots.get("sensor_data", sql=sql)
```
Target Audience
Shoots is designed to be used in production by data scientists and other Python devs using pandas. The server is configurable to run in various settings, including locally on a laptop if desired. It is useful for anyone who wants to share dataframes, or to store dataframes so they can be easily accessed from different sources.
Comparison
To my knowledge, Shoots is the only data lake with a client that is 100% pandas-native. The get() method returns pandas dataframes natively, so there are no cumbersome translations such as those required by typical databases and data lakes. The server is built on top of Apache Arrow Flight, and is very efficient with storage because it uses Parquet as the storage format natively. While the Shoots client does all of the heavy lifting, if desired the server can be accessed with any Apache Arrow Flight client library, so other languages are supported by the server.
Get Shoots
There is full documentation available in the GitHub repo: https://github.com/rickspencer3/shoots
It is packaged for PyPI as well (https://pypi.org/project/shoots/):

```shell
pip install shoots
```
RDD lookup operation performing weirdly
2024-04-27 14:44 /u/unbiased_crook
Hey there,
I'm currently exploring PySpark and attempting to implement Dijkstra's algorithm using it. However, my query doesn't pertain to the algorithm itself; it's regarding the unexpected behavior of the lookup operation in PySpark. To aid in understanding what's happening, I've added print statements and comments throughout the code. This issue seems to be independent of Dijkstra's algorithm, so even if you're not familiar with it, your insights could still be valuable.
Here is my code:
```python
def dijkstra(graph_dict, source_node):
    sc = spark.sparkContext
    # Initialize distances with infinity for all nodes
    distances = sc.parallelize([(node, float('inf')) for node in graph_dict])
    # Set the distance of the source node to 0
    distances = distances.map(lambda x: (x[0], 0) if x[0] == source_node else x)
    print(distances.collectAsMap())
    # Initialize priority queue with source node and distance 0
    pq = sc.parallelize([(0, source_node)])
    while not pq.isEmpty():
        # Get the node with minimum distance from the priority queue
        current_distance, current_node = pq.min()
        print(f'Current Node: {current_node}; Current Distance: {current_distance}')
        # Remove the current node from the priority queue
        pq = pq.filter(lambda x: x != (current_distance, current_node))
        # Iterate over neighbors of the current node
        for neighbour, weight in graph_dict[current_node].items():
            print(f'neighbour: {neighbour}; weight: {weight}')
            # Calculate the distance to the neighbor through the current node
            distance = current_distance + weight
            print(f'Distance: {distance}')
            # Lookup the current distance to the neighbor
            lookup_result = distances.lookup(neighbour)[0]
            print(f'lookup result for neighbour {neighbour}: {lookup_result}')
            # If the new distance is smaller, update the distance and add to priority queue
            if distance < lookup_result:
                print('True')
                distances = distances.map(lambda x: (x[0], distance) if x[0] == neighbour else x)
                pq = pq.union(sc.parallelize([(distance, neighbour)]))
    return distances.collectAsMap()
```
The input graph_dict is a Python dictionary that looks like this:
{'0': {'1': 7, '2': 1, '3': 4}, '1': {'0': 7, '2': 1, '3': 1}, '2': {'0': 1, '1': 1, '3': 10}, '3': {'0': 4, '1': 1, '2': 10}}
Function is called as:
dijkstra(graph_dict, '0')
And this is the output (logs) I am getting when running it:

```
{'0': 0, '1': inf, '2': inf, '3': inf}
Current Node: 0; Current Distance: 0
neighbour: 1; weight: 7
Distance: 7
lookup result for neighbour 1: inf
True
neighbour: 2; weight: 1
Distance: 1
lookup result for neighbour 2: 1
neighbour: 3; weight: 4
Distance: 4
lookup result for neighbour 3: 4
Current Node: 1; Current Distance: 7
neighbour: 0; weight: 7
Distance: 14
lookup result for neighbour 0: 14
neighbour: 2; weight: 1
Distance: 8
lookup result for neighbour 2: 8
neighbour: 3; weight: 1
Distance: 8
lookup result for neighbour 3: 8
{'0': 0, '1': inf, '2': inf, '3': 8}
```
From the output, what's weird is that the lookup result for neighbour 2 should be inf and not 1, and the same goes for all the other neighbours. Only the first time, for neighbour 1, is it correct as inf.
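A likely culprit (offered as an assumption, not verified against Spark internals) is the interaction of lazy evaluation with Python's late-binding closures: each `distances.map(lambda ...)` captures the variable `distance`, not its value, and `lookup()` only forces evaluation later, by which point `distance` has moved on to a newer value. A plain-Python sketch of the same effect, using the lazy built-in `map` as a stand-in for an RDD transformation:

```python
data = [('1', float('inf')), ('2', float('inf'))]

distance = 7
# Lazy "transformation": the lambda closes over the variable `distance`,
# not its current value, and map() does no work yet.
lazy = map(lambda x: (x[0], distance) if x[0] == '1' else x, data)

distance = 1  # the loop moves on before the map is evaluated

result = list(lazy)  # evaluation happens here, seeing distance == 1
print(result)        # [('1', 1), ('2', inf)] — not [('1', 7), ('2', inf)]
```

The usual fix for this pattern is to bind the value at definition time, e.g. `lambda x, d=distance: ...`.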
milkcow - First package/library
2024-04-27 14:35 /u/Samuel_G_Reynoso
Excited to share milkcow, my first Python package. I'd love any feedback, and to continue building out the parts of this package that show potential.
https://pypi.org/project/milkcow/ https://github.com/SamReynoso/milkcow
What MilkCow Does
Milkcow automates database creation and offers in-memory key-value mapping for data handling, whether you're building middleware, local storage, or multiprocessing scripts.
Target Audience
MilkCow is designed for developers looking to streamline the development process. It caters to those who want to simplify data handling.
Comparison
Milkcow aims for simplicity, offering a way for developers to get started more easily. Additional functionalities, including database creation and the in-memory datastore, enhance its usability.
```python
from milkcow import ObjectCow

oc = ObjectCow(Record)
oc.push('Bob', records)
objs = oc.new('Bob')
k, v = oc.items()
for k in oc.keys():
    new = oc.new(k)
```
```python
from milkcow import MilkCow

mc = MilkCow(Record)
mc.pull('Bob')
mc.push('Alice', list[Record])
sender = mc.sender.new_sender()
sender = mc.sender.all_sender()
sender = mc.sender.keyed_sender('Alice')
sender.send()
```
Creating a web scraping project that scrapes job listings from SimplyHired
2024-04-27 14:35 /u/Slayerma
What should I do? It is showing a 403 error, and the job-count card shows 0 where there are 112 jobs listed. I don't know why this is happening.
Anyone know the answer to this?
2024-04-27 14:15 /u/DoubleBandicoot7775
It’s really bothering me
ASCII plot backend package for matplotlib
2024-04-27 11:06 /u/jetpack_away
Hi
I've made a package called mpl_ascii which is a backend for matplotlib. You can find it here: https://github.com/chriscave/mpl_ascii
I would love to share it with others and see what you guys think
What it is
It is a backend for matplotlib that converts your plots into ASCII characters.
At the moment I have only added support for bar charts, scatter plots, and line plots, but if there's demand for more then I would love to keep working on it.
Target Audience:
Anyone using matplotlib to create plots who might also want to track how their plots change with their codebase (i.e. version control).
Comparison:
There are a few plotting libraries that produce ASCII plots, but I have only come across this one that is a backend for matplotlib: https://github.com/gooofy/drawilleplot. I think it's a great package and it is really clever code, but I found it a little lacking when you have multiple colours in a plot. Let me know if you know of other matplotlib backends that do similar things.
Use case:
A use case I can think of is for version controlling your plots. Having your plot as a txt format means it can be much easier to see the diff and the files you are committing are much smaller.
Since it is only a backend to matplotlib then you only need to switch to it and you don't need to recreate your plots in a different plotting library.
Thanks for reading and let me know what you think! :)
Ideas required for a dataset I've gathered.
2024-04-27 09:26 /u/Albert_AG
I have a dataset of reddit cross posts I've gathered.
Link:
https://www.kaggle.com/datasets/albertg30/reddit-cross-posts-dataset
I need some interesting & challenging project/research ideas I can do using this.
I'll note any idea! Thank you!
American Airlines scraper made in Python with only http requests
2024-04-27 03:54 /u/JohnBalvin
Hello wonderful community,
Today I'll present to you pyaair, a scraper written in pure Python: https://github.com/johnbalvin/pyaair
Easy installation

```shell
pip install pyaair
```

Easy usage

```python
airports = pyaair.airports("miami", "")
```
Always remember: only use Selenium, Puppeteer, Playwright, etc. when it's strictly necessary.
Let me know what you think,
thanks
About me:
I'm a full-stack developer specialized in web scraping and backend, with 6-7 years of experience.
Sensor-App: A Sensor Data Displaying/Streaming Android App written in Python
2024-04-25 21:14 /u/StoneSteel_1
Sensor-App is an Android app whose main focus is to help create a real-time mobile sensor data stream for computer applications, data collection, AR, VR, etc.
Github: SensorApp
Features of Sensor-App
- Real-Time Sensor Data display
- Faster Real-Time Sensor Data Streaming via TCP Sockets
- Simple and Easy setup of Data Streaming Server
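To show what consuming such a TCP sensor stream can look like on the computer side, here is a self-contained sketch; the newline-delimited JSON wire format and the field names are assumptions for illustration, not the app's documented protocol (a local socket pair stands in for the phone).

```python
import json
import socket
import threading

def serve_fake_sensor(conn):
    # Stand-in for the phone: emit a few accelerometer samples, then close.
    for i in range(3):
        sample = {"sensor": "accelerometer", "x": 0.1 * i, "y": 0.0, "z": 9.8}
        conn.sendall((json.dumps(sample) + "\n").encode())
    conn.close()

def read_samples(conn):
    # Collect newline-delimited JSON records until the peer closes.
    buf = b""
    samples = []
    while True:
        chunk = conn.recv(1024)
        if not chunk:
            break
        buf += chunk
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            samples.append(json.loads(line))
    return samples

server_sock, client_sock = socket.socketpair()
t = threading.Thread(target=serve_fake_sensor, args=(server_sock,))
t.start()
samples = read_samples(client_sock)
t.join()
print(len(samples))  # 3
```

In a real setup the `socketpair()` would be replaced by `socket.create_connection((phone_ip, port))` against the app's streaming server.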
What my Project Does
My project is aimed to help provide a Real-Time Mobile Sensor data streaming service.
Target Audience
Computer Programmers, Data Scientists, AR and VR enthusiasts
Remarks
- This application was made with the help of the BeeWare tools.
- I made this application to test out the abilities of the BeeWare tools, and to get experience in Android app development with Python.
- Currently the project only supports the accelerometer; I will update it soon to support other sensors too.
- I am always open to hearing advice and constructive criticism about my project.
- I would like to hear your opinion of my project :}
Thanks for Reading, hope you try out my project :)