#numpy
HIRING: Senior Machine Learning Engineer / Remote 👉 https://ai-jobs.net/J70929/ #AI #MachineLearning #DataJobs #Jobsearch #MLjobs #bigdata #DataScience #AIjobs #techjobs #DataBricks #biomarkers #jobposting #remotework #numpy #pandas #numba #torch #tensorflow #jupyter
HIRING: Staff Machine Learning Engineer (Tech Lead) / Remote 👉 https://ai-jobs.net/J70715/ #AI #MachineLearning #DataJobs #Jobsearch #MLjobs #bigdata #DataScience #AIjobs #techjobs #hiringnow #remotejob #biomarkers #OKRs #numpy #pandas #tensorflow #deeplearning #remotework
Julia is the new upstart in the data science world, but can it keep up with the tried and tested combination of Python + NumPy + Numba?
https://towardsdatascience.com/is-julia-faster-than-python-and-numba-897c53694621
#DataScience #deeplearning #MachineLearning #Python #NumPy #Numba #JuliaLang
Why does low swappiness (10) lead to system crashes? #ram #swap #memoryusage #numpy
Spent the day programming #python for the first time in ages, and on the one hand it’s great how large the ecosystem is and how quickly I can get something up. On the other hand, it didn’t take long before I wished I was back in #RustLang or even #CPlusPlus..!
The amount of foot guns, trip mines and quicksand was #%^*! Luckily #numpy helped keep me on track for the most part.
Perhaps it’s time to check out using python libraries from rust…
#3GoodThings for today:
1. All my students got their #SDR up and running
2. Sitting on the front porch hacking #Python #DSP #NumPY #SciPy #MatplotLib with a glass of wine
3. Chicken Tikka Masala sitting in the slow cooker for dinner
Fantastic NumPy resource that @melissawm has shown me:
From Python to Numpy by Nicolas P. Rougier
https://www.labri.fr/perso/nrougier/from-python-to-numpy/
I'm surely going to learn a lot and tap into this as I try to write about #Python + #NumPy for "lay people" like myself, in Portuguese.
One of those crazy urges that strike me from time to time: today I felt like starting to write about #NumPy for lay people.
Starting with my unbeatable credentials: I am a complete layman! I believe that can actually help a bit, and I have a trump card: friends who really understand the subject, like the amazing @melissawm, who I trust will (given enough time) stop me from saying really big nonsense (the small nonsense is inevitable).
You'll be able to follow me here:
https://hackmd.io/@villares/numpy-para-leigos
#Python in #Excel (in Beta) #Microsoft 🤝 #Anaconda
Default imported libraries:
#matplotlib
#numpy
#pandas
#seaborn
#statsmodels
Windows only, needs internet access; the code is executed on MS servers without network or file access
see https://aka.ms/python-in-excel-getting-started & https://www.anaconda.com/excel


A #Python library to write reliable programs that interact with generative models (more like #NumPy than #LangChain)
How decomposing #4D objects into lower-dimensional faces helps to determine intersections and containment: https://onkeypress.blogspot.com/2023/08/hypergeometry-intersections-and.html Part of an ongoing project to extend #raytracing to higher dimensions. #CGI #Python #numpy
You think it's hot today?
First I've been trying to figure out a new segfault in #pydantic 2 with #Python 3.12. I haven't been able to get far, except for establishing that it's a #heisenbug.
https://github.com/pydantic/pydantic/issues/7181
Then I've been testing a fresh #LLVM snapshot, only to discover the test suite is broken on 32-bit platforms again. After a bisect, it turned out to be caused by an NFCi ("no functional change intended") change to the hash logic; I guess function ordering depends on the platform now.
https://reviews.llvm.org/D158217#4600956
Finally, I've been figuring out a #trimesh segfault in tests. Apparently it's specific to #numpy 1.26.0 beta, so I've gotten a backtrace and filed a bug there.
I really liked @t_redactyl's talk about #Python optimization with #Numpy. I thought it was going to be the typical "arrays are quicker than loops and that's it", but I didn't know about broadcasting and really liked the trick with sorting.
I also really liked the time taken to explain how lists work in memory vs. arrays.
Check it out at https://youtu.be/8Nwk-elxdEQ
Today's #CreativeCoding is some inset rectangle packing. Rather than storing all the rectangles as objects, as in a traditional packing algorithm, it evaluates the canvas pixels as a numpy array, comparing the values in a candidate region and making sure they are all the same (see the sketch after the code link below)
This is really inefficient, and takes more than an hour to generate an image, but this was more my way of learning more #Numpy functions and deepening my #python understanding #py5
Code: https://codeberg.org/TomLarrow/creative-coding-experiments/src/branch/main/x_0095
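A minimal sketch of that uniformity test, assuming a grayscale canvas (the rectangle coordinates are made up for illustration):
import numpy as np

# a made-up grayscale canvas and candidate rectangle, for illustration
pixels = np.zeros((400, 400), dtype=np.uint8)
x, y, w, h = 10, 20, 50, 30

region = pixels[y:y + h, x:x + w]
is_uniform = (region == region.flat[0]).all()  # True only if every pixel matches
print(is_uniform)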
Why are the "batch" axes always the leading axes in NumPy? I designed all my packages to use the trailing axes as batch axes because this seems more natural to me. Now I'm thinking about switching to NumPy's convention - just to make things more intuitive for NumPy users. Any ideas on that? #python #batch #numpy
HIRING: AI/ML Engineer / Remote (Europe only) 👉 https://ai-jobs.net/J62940/ #AI #MachineLearning #DataJobs #Jobsearch #MLjobs #bigdata #DataScience #AIjobs #techjobs #hiringnow #job #cyberjobs #remotework #TensorFlow #PyTorch #NumPy #WordPress #HuggingFace
@villares@pynews.com.br #numpy vectorization is so magic...
import numpy as np
import py5

def quadratic_points(ax, ay, bx, by, cx, cy, num_points=None, first_point=False):
    # estimate a point count from the control polygon's perimeter
    if num_points is None:
        num_points = int(py5.dist(ax, ay, bx, by) + py5.dist(bx, by, cx, cy) + py5.dist(ax, ay, cx, cy)) // 10
    if num_points <= 2:
        return [(ax, ay), (cx, cy)] if first_point else [(cx, cy)]
    # evaluate the quadratic Bézier at all parameter values t at once
    t = np.arange(0 if first_point else 1, num_points + 1) / num_points
    x = (1 - t) * (1 - t) * ax + 2 * (1 - t) * t * bx + t * t * cx
    y = (1 - t) * (1 - t) * ay + 2 * (1 - t) * t * by + t * t * cy
    return np.column_stack((x, y))
The numpy.lib.scimath module provides a workaround for some of the limitations of the math module in Python. It includes a sqrt function that can handle negative numbers and returns complex results for negative inputs. A simple use case would be to calculate the square root of a negative number without raising a ValueError.
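For instance (a tiny sketch of my own, not from the NumPy docs):
import numpy as np
from numpy.lib import scimath

# np.sqrt(-4.0) would return nan (with a warning); scimath switches to
# complex output for negative inputs instead of failing
print(scimath.sqrt(-4))        # 2j
print(scimath.sqrt([1, -1]))   # [1.+0.j 0.+1.j]
print(scimath.log(-np.e))      # (1+3.141592653589793j)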

`numpy.lib.index_tricks` provides a set of classes and functions to construct arrays with different indices. `set_module` is a function decorator that assigns a module name to a given class or function. #python #numpy #set_module
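A quick sketch of a couple of index_tricks residents (my own examples; the set_module detail is internal, so treat that part as an assumption):
import numpy as np

# r_ and c_ live in numpy.lib.index_tricks: they build arrays from
# slice-style index expressions
row = np.r_[0:5, 10, 20]                # [ 0  1  2  3  4 10 20]
cols = np.c_[np.ones(3), np.arange(3)]  # 1-D arrays stacked as columns
print(row, cols.shape)

# set_module is (I assume, from reading the source) why these objects
# report 'numpy' instead of the private submodule they are defined in
print(type(np.r_).__module__)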

For research projects where I use #NumPy and #MatPlotLib, I actually like using #Jupyter! It is just easier to run code and view my plots.
With #PyQt, you can even have widgets like sliders and other #Qt stuff.
It just speeds up my prototyping and makes me more productive. Naturally, only my plotting code and math exist in the .ipynb, and the rest is just imported from normal .py files. Thus, it allows for quick conversions once the prototyping is done.
Just checked it out!
It's a weird one. I don't think I've encountered any place or problem to use it, except for mathematical operations, where I would normally just lean towards #Numpy.
But I'll keep it in mind. I like functions like these, which force me to change my solutions to use optimized built-ins rather than rolling my own.
Today I learned that #PyGEOS and #shapely merged in 2021 and that brought ufunc numpy capabilities to shapely.
https://shapely.readthedocs.io/en/stable/ #Python #GIS #GEOS #computationalGeometry #numpy
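A small sketch of what that enables (shapely >= 2.0, after the merge; the random points are just an example):
import numpy as np
import shapely

# vectorized, ufunc-style geometry ops over whole NumPy arrays, no Python loop
pts = shapely.points(np.random.rand(1000), np.random.rand(1000))
box = shapely.box(0.25, 0.25, 0.75, 0.75)
inside = shapely.contains(box, pts)  # boolean NumPy array
print(inside.sum())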
This is probably useless but, with a bit of #monkeyPatching, one can make Py5Image objects picklable... https://gist.github.com/villares/3e00c5c4e3366b18ebadb9073e46c6d1
@lsmith You can save a bit of memory by using tuples instead of lists and NamedTuple instead of dicts (if you know the dicts' keys in advance).
If you can define the data structure more clearly, #numpy or #pandas will save even more memory for you. They now use pyarrow [1] inside, which is very good at packing data.
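A rough sketch of that memory ladder (illustrative sizes; exact byte counts vary by Python version):
import sys
from typing import NamedTuple
import numpy as np

print(sys.getsizeof([1.0, 2.0, 3.0]))  # list: larger (it over-allocates)
print(sys.getsizeof((1.0, 2.0, 3.0)))  # tuple: smaller, fixed size

class Point(NamedTuple):  # fixed field names, so no per-instance __dict__
    x: float
    y: float

p = Point(1.0, 2.0)

# a NumPy array packs a million points into one contiguous block
points = np.zeros((1_000_000, 2), dtype=np.float32)
print(points.nbytes)  # 8_000_000 bytes, far less than a million Python objects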
There is another quirk, not to say an issue, in AWS Glue. In Python shell jobs using Python 3.9, pip is not aware of the pre-built analytics libraries when installing an external wheel. As a result, it resolves and downloads from the package registry even if the pre-built versions of pandas or numpy satisfy the wheel's requirements.
#Python #AWS #AWSGlue #bug #pandas #numpy
https://docs.aws.amazon.com/glue/latest/dg/add-job-python.html
Sometimes I go into a #CreativeCoding session wanting to make something specific. Other times I wonder what will happen if I try something weird
This is one of those. I use #Py5 to draw short lines on the screen, then capture the pixel array in a #NumPy array. Then I use the NumPy command roll to literally roll those pixel values around the array, writing them back to the canvas when they are in different positions. This is the result after thousands of positions
Code https://codeberg.org/TomLarrow/creative-coding-experiments/src/branch/main/x_0084
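The core of the trick, as a tiny sketch with the py5 plumbing omitted (array shapes are illustrative):
import numpy as np

# np.roll shifts the whole pixel array with wrap-around, on both axes at once
pixels = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
rolled = np.roll(pixels, shift=(3, -5), axis=(0, 1))  # down 3 rows, left 5 columns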
@DanielaKEngert @sentientmortal @fell @_alen
As a more serious note: as discussed by various people in various other posts, the performance of a program depends on so many things; the language itself is only one of many factors. And in reality, non-trivial software is often a combination of multiple programming languages anyway, like #python for complex high-level code and #cpp for specific performance-critical code. As done with #numpy or #tensorflow.
#Cython 3.0 is (almost) out!
(binaries are being deployed right now, so it should be available in a couple hours)
https://github.com/cython/cython/releases/tag/3.0.0
This is exciting news for our ecosystem because it's the first stable version able to *not* generate deprecated C-API #numpy code. When it's widely adopted, NumPy devs will finally be able to move forward with performance optimisations that were not possible without breaking everyone's favourite package!
(Learning) Machine Learning
I have been studying machine learning recently and trying out different tools for it. Here are some observations about IDEs in relation to ML (on Windows):
1. No IDE, just the Command Prompt terminal and a web browser. That's the one I like least. The Command Prompt keeps forgetting all the commands I used in previous sessions. There must be some workaround for that, but I don't care enough to look for it. Also, the default UI for Jupyter Notebook is pretty crappy; gotta install and set up some themes.
2. VS Code. This one I like the most. It works well, but I keep getting some bogus warnings about libraries not being imported properly; it works anyway, so... 🤷‍♂️ Also, I find the UI in VS Code all right.
3. PyCharm. The Community Edition only lets you view notebooks in read-only mode. There is no way I'm paying for the Pro version, so that's it for that one.
What are your favourite tools?

The point of these exercises is to show that it's too simplistic to say that programming language X is better or faster than programming language Y.
Last time I had such discussions was as teenager.😉
Typically you use and combine various programming languages, e.g. #python (for high-level code) with #cpp (for performance-critical low-level code). Textbook example: #numpy. Or #tensorflow.
@fell In fact, another run looks like this.
But as a serious note: In my experience, in most #python programs the bottleneck is usually not the #python code itself. I'm a big fan of combining #python for most of the application combined with #cpp for performance critical code, which is usually much less code than you might think at first, and which you usually don't have to program anyway. For example, #numpy is implemented in #c/#cpp.
If you use #NumPy, upper bound your dependencies to <2.0 now.
Also, as of 1.25, you no longer need to use oldest-supported-numpy in your builds.
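For example, with setuptools that could look like this (the package name and lower bound are illustrative):
# minimal setup.py sketch; the name and lower bound are made up
from setuptools import setup

setup(
    name="yourpkg",
    install_requires=["numpy>=1.22,<2.0"],  # the upper bound advised above
)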
It still feels like magic to me that #NumPy allows slices that aren't "rectangular". They can be arbitrary index lists, which in the simplest form allows you to "draw" polygons by first selecting the pixels and then making a single assignment. That's what ski.draw.polygon does.
But it goes further: you can have truly arbitrary lists of multi-dimensional indices, so they don't have to be contiguous or anything. My simple high-school math mind can barely accept it 🤯
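A small sketch of the idea (my own example): index arrays instead of rectangular slices, with skimage.draw.polygon doing the selection step.
import numpy as np
from skimage import draw  # scikit-image, home of ski.draw.polygon

img = np.zeros((5, 5), dtype=int)

# index arrays instead of rectangular slices: one assignment,
# four scattered pixels
rows = np.array([0, 1, 3, 4])
cols = np.array([4, 2, 0, 3])
img[rows, cols] = 9

# draw.polygon does the selection step: it returns the (rr, cc)
# index arrays for all pixels inside the polygon
rr, cc = draw.polygon([0, 0, 4], [0, 4, 2], shape=img.shape)
img[rr, cc] = 5
print(img)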
This year is very intensive in terms of migrations for scientific Python maintainers:
- a new #CPython (3.12) in October
- a major version of #Cython (3.0) announced for this summer
- a major version of #Numpy (2.0) scheduled for the end of the year
All of which will require careful testing and will probably take some time to propagate through the giant mess that is the dependency tree of scientific #Python 😵‍💫
#Processing #Python #py5 #NumPy #ConwaySGameOfLife variation https://github.com/villares/sketch-a-day/tree/main/2023/sketch_2023_07_08 #CellularAutomata #CreativeCoding
[Sorry for the bad screen capture missing frames :(]
So what's the easiest way to handle time and date data with timezone information in #python (#pandas, #datetime, #numpy, or #xarray)? I find myself switching back and forth between datetime64, Timestamp, adding timedelta or tzinfo haphazardly, and I've never really settled on the best way to handle these data. I'm primarily working with pandas DataFrames or xarray Datasets. #programmingHelp
Hello I'm Pekka and I do experiments in computer graphics, video art, and machine learning. I also read a lot. I mostly post about my hobby projects (or #books I'm reading).
At the moment I'm working on #homebrew #N64 stuff and making some mashups tracks. Occasionally also #demoscene coding with or without #shaders :)
@TomLarrow, this is what I wanted to do and show you yesterday but didn't have the time:
https://github.com/villares/sketch-a-day/blob/main/2023/sketch_2023_06_24/sketch_2023_06_24.py #py5 #Python
It is a clever #NumPy masking strategy I learned from @hx2A that lets you benefit from a transparent background (or even translucent objects) in the offscreen buffer image you draw before the clip/mask. In this case it just saves you from drawing a "visible circle region", but if you have lots of objects, or translucent objects, it can be very handy.
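A minimal sketch of the masking idea with plain arrays (the real sketch uses py5 images; the names here are illustrative):
import numpy as np

# illustrative array stand-ins for the py5 images
canvas = np.zeros((200, 200, 3), dtype=np.uint8)
buffer = np.zeros((200, 200, 4), dtype=np.uint8)  # RGBA offscreen image

mask = buffer[..., 3] > 0              # boolean mask from the alpha channel
canvas[mask] = buffer[..., :3][mask]   # copy RGB only where something was drawn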
Cheers @yorik, thanks for your work on @FreeCAD !
Dear @hx2A thanks for your work on @py5coding !
Oi @melissawm thank you for your work on #numpy !
The extraction seems to be working really well, but I can't tell how well yet.
It has two repeating sections. One is very clearly good data that looks as expected.
The other looks like noise, but it is very repeatable and (partially) patterned in a way that true noise wouldn't be. It is also partially larger than the real effect could be.
What I really need is more example data. Fortunately, bosses/partners are listening enough to make that happen (maybe).
I haven't logged in to #StackOverflow in years, but I had to upvote this amazingly elegant answer
https://stackoverflow.com/a/33920320/678832
And it isn't just a good idea--it actually works!
I liked today's mix better...
#Processing #Python #py5 #numpy #CreativeCoding (update: the animation compressed badly, so I'll post a static image that gives you an idea)
import py5
import numpy as np

def setup():
    global npa, R, G, B
    py5.size(600, 600)
    npa = np.empty((py5.width, py5.height)).T
    npa.fill(100)
    # distance fields to three different "color centers"
    R = dist_to_pos(py5.width, py5.height, 300, 300)
    G = dist_to_pos(py5.width, py5.height, 200, 300)
    B = dist_to_pos(py5.width, py5.height, 300, 200)

def draw():
    # random per-pixel thresholds, one array per channel
    rnd_r = np.random.randint(0, 300, size=(py5.width, py5.height)).T
    rnd_g = np.random.randint(0, 300, size=(py5.width, py5.height)).T
    rnd_b = np.random.randint(0, 300, size=(py5.width, py5.height)).T
    img = np.dstack([R < rnd_r, G < rnd_g, B < rnd_b])
    py5.set_np_pixels(img * 150, 'RGB')

def dist_to_pos(width, height, cx, cy):
    """returns a 2D array filled with distances to (cx, cy)"""
    x = np.arange(width)
    y = np.arange(height)
    xx, yy = np.meshgrid(x, y)
    return np.linalg.norm(np.array([xx - cx, yy - cy]), axis=0)

py5.run_sketch()
yesterday's sketch #Processing #Python #py5 #numpy https://abav.lugaralgum.com/sketch-a-day
Today I updated https://abav.lugaralgum.com/sketch-a-day, which was stuck a few days behind... today's sketch...
sketch_2022_05_30 #Processing #Python #py5 #numpy, same bitwise pattern strategy as yesterday's, learned from Naoki Tsutae.
#CreativeCoding #Processing #Python
# This pattern strategy I learned from Naoki Tsutae
# https://openprocessing.org/user/154720?view=sketches&o=48#sk
import numpy as np  # numpy
import py5  # py5

order = 500
power = 59

def setup():
    global color_map, x, y
    py5.size(1000, 1000)
    py5.no_smooth()
    color_map = np.array([
        [py5.red(hsb(i)), py5.green(hsb(i)), py5.blue(hsb(i))]
        for i in range(256)])
    x, y = np.meshgrid(np.arange(0, order), np.arange(0, order))

def draw():
    py5.background(0)
    pattern = func(x, y)
    img = py5.create_image_from_numpy(color_map[pattern], 'RGB')
    py5.image(img, 0, 0, py5.width, py5.height)

def hsb(h, sat=255, bri=255):
    py5.color_mode(py5.HSB)
    return py5.color(h, sat, bri)

@np.vectorize
def func(x, y):
    return int((x ^ y) ** (power / 10)) % 256

def key_pressed():
    global power
    if py5.key_code == py5.UP:
        power += 1
    elif py5.key_code == py5.DOWN:
        power = max(power - 1, 1)
    elif py5.key == 's':
        py5.save_frame(f'out{order}-{power}.png')
    print(power)

py5.run_sketch(block=False)
I really like @bitartbot and I always wanted to try some of the patterns with #py5 (#Processing + #Python), also #numpy vectorization seemed cool to try...
#Processing #Python #numpy #shapely #py5 #sketchAday for 24th and 25th May
sketch_2022_05_22 #Processing #Python #py5 #numpy https://abav.lugaralgum.com/sketch-a-day
Water is maybe a tiny bit faster, but still very slow. I added text for the current material and I can pickle the numpy array and load it back.
Will I ever get a good #numpy intuition?
I was making some clumsy left-right comparisons and swaps with grid.T[1:] vs. grid.T[:-1] and it worked. Now I've tried grid vs. np.roll(grid, ...) and I'm struggling to make the masks for the swap work :( (see the sketch below)
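For reference, a minimal sketch of the mismatch (my own illustration): both forms compare each cell with its right neighbour, but np.roll wraps the last column around to the first.
import numpy as np

grid = np.random.randint(0, 3, (5, 5))

swap_slice = grid[:, :-1] > grid[:, 1:]       # shape (5, 4), no wrap-around

swap_roll = grid > np.roll(grid, -1, axis=1)  # shape (5, 5), last column wraps
swap_roll[:, -1] = False                      # mask out the wrapped column to match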
I made an object recognition program in #python, and the dataset seemed more suitable for recognizing stuff on the street, so that's where I ended up. Great for educational purposes, but it definitely makes me think about the ethical abuses of #ai today and in the near future. #computervision #opencv #numpy #tensorflow #classification
sketch_2023_05_21 #Processing #Python #py5 #numpy #creativeCoding
My naive sandbox now has, besides sand, concrete, water and rock. Water is very slow... but OK. All under 100 lines of code!
https://github.com/villares/sketch-a-day/blob/main/2023/sketch_2023_05_21/sketch_2023_05_21.py
sketch_2022_05_20 update!
I couldn't resist fiddling a bit more, and it came out more realistic!
Code at https://abav.lugaralgum.com/sketch-a-day
#Python #numpy #imageProcessing I have done this before but I can't remember how and I'm lazy:
Say you have a table/dict of ints to colors like this:
palette = {
    1: rgb(100, 0, 200),
    2: rgb(200, 100, 0),
    3: rgb(0, 200, 100),
    ...}
# it could be just an array of tuples or a 2D array maybe... [[100, 0, 200], [200, 100, 0], ...]
And I have another 2D array of ints that I want to convert to a stack of RGB arrays so as to make an image of them (with #Pillow, or in my case #py5)
What would be an elegant way of doing it?
(Writing this I had an idea, but I have to get out to my weekend shift at Sesc... maybe I'll try it later)
cc @TomLarrow
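A sketch of the lookup-table idea hinted at above (numbers are illustrative, matching the example palette): store the palette as an (N, 3) uint8 array and index it directly with the 2D array of ints.
import numpy as np

palette = np.array([
    [0, 0, 0],        # 0: unused/background
    [100, 0, 200],    # 1
    [200, 100, 0],    # 2
    [0, 200, 100],    # 3
], dtype=np.uint8)

ints = np.random.randint(0, 4, (300, 300))  # stand-in for the 2D array of ints
rgb = palette[ints]                         # shape (300, 300, 3), ready for Pillow/py5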